- 60GHz Unlicensed Wireless
- 802.11x – 802.11b – 802.11g – 802.11n
- Applications – Industries
- AV Technology
- Broadcast Engineering
- Content Communications World – CCW NAB
- Conversion and Converters
- Convert 3G HD SDI to HDMI
- Dirac Pro
- Editorial Coverage
- Educational Guides
- Ensemble Designs
- Fiber Optic Medium
- Fiber Optic Transport
- Government and Military
- GSN – GOVERNMENT SECURITY NEWS
- Haute Spot
- Introduction to Fiber Optics
- Jim Jachetta
- Job Listings – Help Wanted
- Leadership – Management
- Market Research
- Military – Government
- MPEG-2 Basic Training
- MPEG-4 H.264
- NAB Show
- NHL – National Hockey League
- Optical Windows and Spectrum
- Partner Profile
- Press Release
- Professional AV – Pro AV
- Routing – Distribution
- Satnews Daily
- Snell's Law
- Speaking Event
- Sports Video Group – SVG
- Storage – Archive
- Trade Shows
- TV Technology
- Types of Fiber-optic Material
- User's Guide to Fiber Optic Video Transmission
- User's Guides
- Video Compression
- Video Networking – Enterprise IPTV
- Video over Cellular
- Video Streaming – Webcasting
- VidOvation Video Report and Newsletter
- Windows Media
- Wireless Video
Category Archives: Broadcast Engineering
Link Distance – Wireless Video Selection Criteria
The total distance that can be covered between endpoints in a wireless link is affected by a combination of factors including frequency, antenna geometry, interference, and obstructions. These factors make precise distance calculations extremely dependent on local environments. However, some general rules can be defined to help guide technology selection.
Rule 1) Lower frequency bands support greater transmission distances, and are less sensitive to signal path obstructions. But low frequency bands are more likely to be restricted by the FCC (or other national authorities) to narrow channel bandwidths and hence limited bit rates.
Rule 2) More complex modulation schemes (such as 16QAM as compared to QPSK) that deliver more bits in a given channel bandwidth require greater signal to noise ratios to deliver an acceptable error rate. Other things being equal, shorter usable link distance limits will apply for more complex modulation.
Rule 3) Narrow-beam antennas produce higher gains than wide-beam ones, thereby permitting longer link distances. Omnidirectional antennas have much shorter ranges than either panel or parabolic antennas.
Rule 4) Greater levels of interfering signals will reduce usable link distances due to a reduction in signal to noise ratio. Interference can come from many sources, including other equipment occupying the same frequencies nearby and consumer devices such as microwave ovens that emit RF energy. In general, heavily populated areas have much more ambient interference than rural environments.
Rule 5) Path obstructions, including buildings, power lines and trees or other vegetation will attenuate wireless signals and reduce usable range. High frequency signals tend to suffer greater attenuation than low frequency signals for a given obstacle. Extremely high frequency signals may only work if there is a clear line of sight from the transmitter to the receiver.
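The frequency dependence behind Rules 1 and 5 can be seen directly in the free-space path loss formula. The sketch below (a Python illustration, not a link-budget tool; real links also lose signal to obstructions, interference and antenna effects) compares loss at two common unlicensed bands over the same distance:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# At the same 500m distance, a 60 GHz link loses roughly 20 dB
# more signal than a 5.8 GHz link, before any obstruction losses.
loss_5g8 = fspl_db(500, 5.8e9)   # ~101.7 dB
loss_60g = fspl_db(500, 60e9)    # ~122.0 dB
```

The ~20 dB gap is why 60GHz systems lean on high-gain narrow-beam antennas and clear line of sight (Rules 3 and 5) to achieve useful range.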
SDI Over Ethernet
Serial digital interface, more commonly referred to as SDI, is a type of interface used to transmit broadcast-quality digital video. It was originally, and is still most prevalently, used by the movie and television industry. However, as the trend toward streaming video and video downloads continues to build, there is growing demand from businesses and consumers alike to carry SDI over Ethernet. This technology was not available years ago, but it is now increasingly common and affordable, which puts it within reach for your own productions. The challenge is finding the right solution when implementing it in your office or other environment. See the Introductory Guide to IPTV and Video Networking.
Different Capabilities and Features to Look For
The capabilities of SDI over Ethernet vary from one provider to another, so research the options thoroughly before deciding which best fits your needs. For example, one option streams data bi-directionally and works with SD-SDI, ASI and even HD-SDI signals. Transmission speed also varies, and some products are better suited to short-distance transport than long-distance transport.
Superior Digital Encoders
In addition to having the network in place to support SDI over Ethernet, you will also need to invest in digital encoders. Several encoders are available, but superior encoders have dual inputs to support two simulcast channels at the same time. They may also offer lower-cost viewer options, high-resolution modes and other enhanced features. With multiple options available, both quality and cost are critical considerations. Through VidOvation, you can easily find the highest quality encoders to use with SDI over Ethernet at the best value available.
VidOvation Showcases New Product Additions for its Broadcast, Corporate, and Government AV Product Lines at NAB 2013
For Immediate Release
Innovative IPTV, Webcasting, 60GHz wireless HD-SDI, and Fiber Optic Transport systems deliver broadcast-quality audio/video at 1/3 the cost
Las Vegas, Nevada, April 3, 2013 – NAB Booth #: N1307 – VidOvation, a leading technology provider of video and data communication systems to the broadcast television, sports, corporate audio-visual, and government markets, announced today that they will debut at NAB, as part of their expanded business strategy, a new line of product families for IPTV, Webcasting and Fiber Optic Transport designed to increase system performance and flexibility while dramatically reducing overall acquisition, implementation and support costs. The new product families include:
1) The VidLink II-5 60GHz wireless video link system can be used in ad hoc or permanent installations and features zero frame delay, zero interference, uncompressed 1.5G HD-SDI video, 270Mb/s SD and DVB ASI, globally allocated unlicensed 60GHz spectrum, no channel coding, a ruggedized and water-resistant enclosure with water-resistant connections, and a range of 500m for HD and greater than 1000m for SD.
The House of Lords, the UK’s second legislative chamber, has called for a wholesale switch from digital terrestrial to IPTV.
Original Story from Broadcast Engineering – http://broadcastengineering.com/news/uk-s-lords-calls-iptv
Aug. 13, 2012 11:19am
The UK’s House of Lords has put its weight behind a wholesale switch from digital terrestrial to broadband for public service broadcast delivery, even though this might in the short term threaten universal access provision.
The House of Lords is the UK's second legislative chamber; its main function is to revise primary legislation emerging from the House of Commons, which comprises elected members of parliament (MPs). The House of Lords' Communications committee has now proposed that broadcasting be moved away from terrestrial delivery towards IPTV and OTT over the Internet as the primary means of distribution. This recommendation is made in the report "Broadband for all," which sets out an alternative vision to that of the government. The government wants to retain digital terrestrial as the base medium for delivery of services from the country's Free To Air broadcasters, including the BBC and ITV.
Part of this government strategy is to roll out superfast broadband across the country to provide the basis for future multichannel HD services. But surprisingly, the House of Lords questions this, suggesting the nation’s interests might be better served by first ensuring universal access. The problem with this is that such universal access would be at speeds of 2Mb/s at best given current technology, without substantial fiber deployment in remote areas at great cost.
Therefore, the country will probably be better served for universal access in the short term by continuing with digital terrestrial, and in the long term by pursuing superfast broadband. Because of this, the House of Lords report seems to fall between two stools.
Original content from Transition to Digital Newsletter, October 16, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-10162011
One of the advantages of digital video is that it can be compressed and transported in an MPEG stream across an IP network. MPEG compressed digital video requires new sets of test tools and troubleshooting skills using bit stream monitoring and testing to accurately identify problems, or preferably, recognize and identify potential problems before they occur.
The MPEG-2 standard is defined by ISO/IEC 13818 as “the generic coding of moving pictures and associated audio information.” It combines lossy video compression and lossy audio data compression to fulfill bandwidth requirements. The foundation of all MPEG compression systems is asymmetric because the encoder is more sophisticated than the decoder.
MPEG encoders are always algorithmic. Some are also adaptive, using a feedback path. MPEG decoders are not adaptive and perform a fixed function. This works well for applications like broadcasting, where the number of expensive complex encoders is few and the number of simple inexpensive decoders is huge.
The MPEG standards provide little information about encoder process and operation. Rather, they specifically define how a decoder interprets metadata in a bit stream. MPEG metadata tells the decoder what rate video was encoded at, and it defines the audio coding, channels and other vital stream information.
A decoder that successfully deciphers MPEG streams is called compliant. The genius of MPEG is that it allows different encoder designs to evolve simultaneously. Generic low-cost and proprietary high-performance encoders and encoding schemes all work because they are all designed to talk to compliant decoders.
Asynchronous Serial Interface (ASI) is a serial interface in which a start bit is sent before each byte and a stop signal is sent after each byte. This type of start-stop communication, without the use of synchronized fixed time intervals, was patented in 1916 and was the key technology that made teletype machines possible. Today, an ASI signal is often the final product of MPEG video compression, ready for transmission to a transmitter, microwave link or fiber. Unlike uncompressed SDI, an ASI signal can carry one or multiple compressed SD, HD or audio streams. ASI transmission speeds are variable and depend on the user's requirements.
There are two transmission formats used by the ASI interface, a 188-byte format and a 204-byte format. The 188-byte format is the more common. If Reed-Solomon error correction data is included, the packet can grow an extra 16 bytes to 204 bytes total.
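Because each packet begins with the same sync byte (0x47) at a fixed spacing, receiving or test equipment can tell the two formats apart by probing the byte stream. A minimal Python sketch (the function name and probe count are illustrative, not from any particular product):

```python
from typing import Optional

SYNC_BYTE = 0x47  # every MPEG-2 TS/ASI packet starts with 0x47

def detect_packet_size(data: bytes, probes: int = 5) -> Optional[int]:
    """Guess the packet size (188 or 204 bytes) by looking for the
    sync byte repeating at a fixed spacing from some start offset."""
    for size in (188, 204):
        for offset in range(size):
            end = offset + (probes - 1) * size
            if end < len(data) and all(
                data[offset + i * size] == SYNC_BYTE for i in range(probes)
            ):
                return size
    return None  # no consistent sync pattern found
```

A 204-byte stream will fail the 188-byte probe because the sync bytes land 204 bytes apart, so the first match found reveals the format (and hence whether Reed-Solomon bytes are present).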
What’s the purpose of a general-purpose oscilloscope (GPO) in troubleshooting MPEG? Not much. Specialized technology demands specialized test gear. MPEG streams are complicated, and MPEG-2 streams are more so. Examining MPEG-2 streams is reminiscent of measuring the front porch or counting the number of sync pulse serrations to manually validate an analog video sync pulse. Well, kind of, anyway.
An MPEG-2 stream can be either an elementary stream (ES), a packetized elementary stream (PES) or a transport stream (TS). The ES and PES are files.
Starting with analog video and audio content, individual ESs are created by applying MPEG-2 compression algorithms to the source content in the MPEG-2 encoder. This process is typically called ingest. The encoder creates an individual compressed ES for each audio and video stream. An optimally functioning encoder will look transparent when decoded in a set-top box and displayed on a professional video monitor for technical inspection.
A good ES depends on several factors, such as the quality of the original source material, and the care used in monitoring and controlling audio and video variables upon ingest. The better the baseband signal, the better the quality of the digital file. Also influencing ES quality is the encoded stream bit rate, and how well the encoder applies its MPEG-2 compression algorithms within the allowable bit rate.
MPEG-2 has two main compression components: intraframe spatial compression and interframe motion compression. Encoders use various techniques, some proprietary, to maintain the maximum allowed bit rate while at the same time allocating bits to both compression components. This balancing act can sometimes be unsuccessful. It is a tradeoff between allocating bits for detail in a single frame and bits to represent the changes (motion) from frame to frame. Which is more important?
Researchers are currently investigating what constitutes a good picture. Presently, there is no direct correlation between the data in the ES and subjective picture quality. For now, the only way of checking encoding quality is with the human eye, after decoding.
Figure 1. The transport stream is defined by the syntax and structure of the TS header. Courtesy DVEO Pro Broadcast Division.
The packetized elementary stream
Individual ESs are essentially endless because the length of an ES is as long as the program itself. Each ES is broken into variable-length packets to create a PES, which contains a header and payload bytes.
The PES header is data about the encoding process that the MPEG decoder needs to successfully decompress the ES. Each individual ES results in an individual PES. At this point, audio and video information still reside in separate PESs. The PES is primarily a logical construct and is not really intended to be used for interchange, transport and interoperability. The PES also serves as a common conversion point between TSs and program streams (PSs), covered below.
Both the TS and PS are formed by packetizing PES files. During the formation of the TS, additional packets containing tables needed to demultiplex the TS are inserted. These tables are collectively called PSI and will be addressed in detail in a moment. Null packets, containing a dummy payload, may also be inserted to fill the intervals between information-bearing packets. Some packets contain timing information for their associated program, called the program clock reference (PCR). The PCR is inserted into one of the optional header fields of the TS packet. Recovery of the PCR allows the decoder to synchronize its clock to the rate of the original encoder clock.
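As a sketch of how a decoder locates the PCR described above, the following Python fragment reads the 33-bit PCR base (90 kHz units) and 9-bit extension (27MHz units) from the optional adaptation field and converts them to seconds. The field offsets follow the MPEG-2 Systems layout; the function name is illustrative:

```python
from typing import Optional

def extract_pcr(packet: bytes) -> Optional[float]:
    """Return the program clock reference in seconds if this
    188-byte TS packet carries one, else None."""
    if len(packet) < 12 or packet[0] != 0x47:
        return None
    # adaptation_field_control: 2 or 3 means an adaptation field is present
    afc = (packet[3] >> 4) & 0x03
    if afc not in (2, 3):
        return None
    af_len = packet[4]
    if af_len < 7 or not (packet[5] & 0x10):  # PCR_flag not set
        return None
    b = packet[6:12]
    # 33-bit base, 6 reserved bits, 9-bit extension
    base = (b[0] << 25) | (b[1] << 17) | (b[2] << 9) | (b[3] << 1) | (b[4] >> 7)
    ext = ((b[4] & 0x01) << 8) | b[5]
    return (base * 300 + ext) / 27_000_000.0  # 27 MHz system clock
```

By comparing successive PCR values against its own clock, the decoder can slave its 27MHz oscillator to the original encoder clock, as the paragraph above describes.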
TS packets are fixed in length at 188 bytes with a minimum 4-byte header and a maximum 184-byte payload. The structure of the TS header is shown in Figure 1. The key fields in the minimum 4-byte header are the sync byte and the packet ID (PID). As its name suggests, the sync byte is a fixed value (0x47) used for delineating the beginning of a TS packet.
The PID is a unique address identifier. Every video and audio stream, as well as each PSI table, needs to have a unique PID. The PID value is provisioned in the MPEG multiplexing equipment. Certain PID values are reserved. Important reserved PID values are indicated in the table below. Other reserved PID values are specified by organizations such as the Digital Video Broadcasting Group (DVB) and the Advanced Television Systems Committee (ATSC) for electronic program guides and other tables.
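The fixed 4-byte header in Figure 1 can be unpacked with simple bit masking. A hypothetical Python helper (field names follow common usage, not any specific tool) shows where the sync byte, PID and other flags sit:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Unpack the fixed 4-byte MPEG-2 transport stream header."""
    if len(packet) < 4 or packet[0] != 0x47:
        raise ValueError("not a TS packet (missing 0x47 sync byte)")
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "transport_priority": bool(packet[1] & 0x20),
        # PID: low 5 bits of byte 1 plus all of byte 2 (13 bits)
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field_control": (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }
```

For example, a packet whose header decodes to PID 0 carries the Program Association Table, while PID 0x1FFF marks the null packets used as filler between information-bearing packets.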
In order to reconstruct a program from all its video, audio and table components, it is necessary to ensure that the PID assignment is done correctly and that there is consistency between PSI table contents and the associated video and audio streams. This is one of the main testing issues in MPEG and will be the focus of the next “Transition to Digital” newsletter tutorial.
Note: The author would like to thank Les Zoltan at DVEO Pro Broadcast Division for his help in the preparation of this tutorial.