TSReader
Understanding TSReader and MPEG-TS
Before we get to TSReader, let's go over MPEG-2 and its transport stream model so that we start from a common frame of reference. Important terms are italicised.
Video
A video encoder takes video frames and compresses them using a number of mathematical techniques. These include transmitting only the difference between frames [etc].
There are currently four types of MPEG video compression used by broadcasters. MPEG-1 was the first standard and is rarely used these days given its inefficiencies. MPEG-2 is now well over a decade old, but is by far the most popular video compression algorithm and is used around the world. MPEG-4 offers only slightly better performance than MPEG-2 for broadcasting and is mostly used by digital cameras and telephones, although a few IPTV networks use it. MPEG-4.10 (also known as H.264) offers up to a 50% bitrate reduction over MPEG-2 and its usage is rapidly growing worldwide - it's used for HD video compression in both the USA and Europe and will be heavily used by the IPTV community.
The output of the video encoder is a stream of bits which, when passed into a decoder as described by the appropriate MPEG specification, results in the reconstruction of the input frame, albeit with noise added as a result of the compression - the amount of noise is a direct result of the bitrate and scene complexity. This output is called an Elementary Stream.
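To make the "difference between frames" idea concrete, here is a minimal sketch in Python. It is not real MPEG - actual encoders use motion compensation, transforms and quantization - and the frame values and helper names are invented purely for illustration:

    # A toy illustration of difference coding: a mostly-static scene produces
    # a small delta, so far fewer values need to be transmitted.
    def frame_delta(previous, current):
        """Return (position, new_value) pairs for the pixels that changed."""
        return [(i, c) for i, (p, c) in enumerate(zip(previous, current)) if p != c]

    def apply_delta(previous, delta):
        """Reconstruct the current frame from the previous frame plus the delta."""
        frame = list(previous)
        for position, value in delta:
            frame[position] = value
        return frame

    # Two 16-"pixel" frames that differ in only three places.
    frame1 = [10, 10, 10, 10, 50, 50, 50, 50, 10, 10, 10, 10, 50, 50, 50, 50]
    frame2 = [10, 10, 10, 10, 50, 55, 50, 50, 10, 12, 10, 10, 50, 50, 51, 50]

    delta = frame_delta(frame1, frame2)
    print("changed pixels:", delta)              # 3 entries instead of 16 full values
    assert apply_delta(frame1, delta) == frame2  # the decoder rebuilds the frame exactly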
Audio
An audio encoder works primarily by reducing audio information [etc].
As in the video realm, there are a number of different audio compression algorithms. MPEG-1 offers three layers of increasing complexity. Layer I is rarely, if ever, used - we've never encountered it. Layer II is the standard for almost all terrestrial, satellite and cable networks around the world. Layer III has become very popular in the portable music space, but generally isn't used by broadcasters. The MPEG-2 audio standard added support for more channels than stereo (surround sound) and also some new compression techniques that are not backward compatible with MPEG-1 audio, such as AAC (Advanced Audio Coding). In the broadcast world MPEG-2 audio hasn't caught on.
In addition to the MPEG standards, there are a few other audio compression systems used by broadcasters. Dolby's "Dolby Digital" system (known as AC3 in the broadcast world) is popular for movie networks and for HD. A few movie networks in Europe also use the Digital Theater System (DTS), which offers better sound quality than AC3, albeit at higher bitrates.
As with video, the output of an audio encoder is an Elementary Stream.
Multiplex Part 1
In analog TV systems, the video and audio are sent simultaneously as separate entities within the channel. In fact, the color information is also sent separately in the analog world: the main video carrier contains the luminance part of the picture (black and white), the color subcarrier carries the red and blue parts of the picture (the green is figured out by the receiver) and then there are separate audio carriers with mono and stereo audio.
If we were to use the same methodology for digital TV, there would need to be two tuners inside the receiver - one to extract the video bitstream and another for the audio. Since the stream is already digital and one tuner costs less than two, the video and audio streams are multiplexed together to form a single stream.
The first step in creating a multiplex is separating the Elementary Stream into chunks (usually a fixed number of bytes). These chunks form a Packetized Elementary Stream, most commonly referred to as a PES. In addition to chunks of the Elementary Stream, the PES contains timestamps so the decoder knows when to decode/display a video frame (or part of the audio stream).
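As a rough sketch of that packetization step, the Python fragment below splits an Elementary Stream into fixed-size chunks and prepends a small header carrying a timestamp. The header layout, the packetize() helper and the 184-byte chunk size are invented for illustration - the real PES header is defined by the MPEG-2 Systems specification and carries considerably more - but the 90 kHz clock is the unit MPEG actually uses for its presentation timestamps:

    import struct

    CHUNK_SIZE = 184   # illustrative chunk size, not a PES requirement
    CLOCK_HZ = 90_000  # MPEG presentation timestamps count ticks of a 90 kHz clock

    def packetize(elementary_stream: bytes, start_s: float, duration_s: float):
        """Yield (header, payload) pairs for each chunk of the Elementary Stream."""
        chunks = [elementary_stream[i:i + CHUNK_SIZE]
                  for i in range(0, len(elementary_stream), CHUNK_SIZE)]
        for n, payload in enumerate(chunks):
            # Spread timestamps evenly across the chunks of this piece of the stream.
            pts = int((start_s + n * duration_s / max(len(chunks), 1)) * CLOCK_HZ)
            header = struct.pack(">IH", pts & 0xFFFFFFFF, len(payload))  # toy 6-byte header
            yield header, payload

    # Example: packetize 1000 bytes of "encoded video" starting at t = 1.0 s.
    for header, payload in packetize(bytes(1000), start_s=1.0, duration_s=0.04):
        pts, length = struct.unpack(">IH", header)
        print(f"PTS={pts} ticks, payload={length} bytes")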