Thanks Don. That is a nicer explanation than the brief version I provided earlier. Cable companies did not use stat-muxing and variable coding at first, but once they started stuffing a lot of channels into your home, the game was on.
On #5, I am behind on writing my primer on video compression. I wrote this intro article for Widescreen Review Magazine but never wrote the second part, which was to explain the basics of how video compression works:
http://www.madronadigital.com/Library/Video Compression.html
For now, here is a very, very brief primer.
Video compression works by taking the initial frame of video and compressing it much like JPEG compression in your camera or computer. The image is divided into blocks, and each block is compressed separately by reducing the number of bits allocated to the different "frequencies" (levels of sharpness) in it. Compress a block too much and it starts to look like a flat square, hence the "blocking" artifacts you see when the camera pans in a sports game. MPEG-2 uses fixed-size blocks; VC-1 and MPEG-4 AVC can vary the block size.
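If code helps, here is a tiny Python/numpy sketch of that block transform and quantization step. It is purely an illustration with made-up numbers and function names, not the actual transform or tables any real codec uses:

```python
import numpy as np

def dct_matrix(n=8):
    """Build the n x n orthonormal DCT-II matrix used to transform a block."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    d[0, :] /= np.sqrt(2)
    return d * np.sqrt(2 / n)

def compress_block(block, qstep):
    """Turn pixels into frequency coefficients, then quantize them.

    A larger qstep throws away more of the high-frequency (sharpness)
    detail and saves bits; push it too far and the decoded block turns
    into a nearly flat square, i.e. the blocking artifact.
    """
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T           # spatial pixels -> frequency coefficients
    return np.round(coeffs / qstep)    # most high-frequency terms round to zero

def decompress_block(quantized, qstep):
    """Undo quantization and the transform to get (lossy) pixels back."""
    d = dct_matrix(quantized.shape[0])
    return d.T @ (quantized * qstep) @ d

# Example: an 8x8 block with a smooth ramp, compressed lightly vs. heavily.
block = np.add.outer(np.arange(8), np.arange(8)) * 16.0
for qstep in (4, 64):
    q = compress_block(block, qstep)
    err = np.abs(decompress_block(q, qstep) - block).mean()
    print(f"qstep={qstep:3d}  nonzero coefficients={np.count_nonzero(q):2d}  mean pixel error={err:.1f}")
```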
Once you have the initial frame compressed, the system attempts to compress subsequent frames by coding only the difference between them. This is done by tracking what happens to each of the above blocks. If I just move my head, the encoder simply tells the receiver to shift those blocks; the full frame is NOT transmitted and we save a ton of data. This is called "motion estimation": the system attempts to figure out which way the blocks are moving.
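And here is the motion estimation idea in the same toy style: a brute-force search of a small window in the previous frame for the best match to a block in the current frame, scored by the sum of absolute differences. Real encoders are far smarter about the search (and do sub-pixel motion), but the principle is the same:

```python
import numpy as np

def find_motion_vector(prev_frame, cur_frame, top, left, size=8, search=4):
    """Find where a block of the current frame came from in the previous frame.

    Brute-force search over a +/- `search` pixel window, scored by the sum
    of absolute differences (SAD).  Returns the best (dy, dx) and its cost.
    If the block merely shifted, the encoder can send the tiny (dy, dx)
    instead of re-sending the pixels.
    """
    cur_block = cur_frame[top:top + size, left:left + size].astype(int)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > prev_frame.shape[0] or x + size > prev_frame.shape[1]:
                continue                                  # candidate falls outside the frame
            cand = prev_frame[y:y + size, x:x + size].astype(int)
            cost = np.abs(cand - cur_block).sum()
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost

# Example: a bright object simply moves two pixels to the right between frames.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 200
cur = np.roll(prev, 2, axis=1)
mv, cost = find_motion_vector(prev, cur, top=8, left=8)
print(f"best motion vector {mv}, SAD {cost}")   # a perfect match: SAD of 0
```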
Of course, real life is not that simple, as the world is not made of blocks that nicely move in one direction or another. If the new frame looks too different, we switch back to the initial mode above and send the entire frame (compressed). That takes a ton of data relative to just sending block movements and will likely blow our bandwidth budget. The solution is to compress the full frame even harder, which makes its blocks show up more. Hence the camera-panning example above.
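To make that trade-off concrete, here is a toy version of the per-frame decision. The bit numbers are made up purely for illustration; real rate control is far more involved:

```python
def choose_frame_coding(residual_cost, bit_budget,
                        inter_bits_per_unit=0.02, intra_bits=80_000):
    """Toy per-frame decision with made-up numbers (not a real rate model).

    If the motion-compensated residual is small, code the frame as
    differences ("inter"), which is cheap.  If the scene changed too much,
    fall back to a full compressed frame ("intra"), which is expensive; if
    even that exceeds the per-frame bit budget, quantize harder, which is
    where the visible blocking comes from.
    """
    inter_bits = residual_cost * inter_bits_per_unit
    if inter_bits < intra_bits:
        return f"inter frame, ~{inter_bits:,.0f} bits (block motion plus a small residual)"
    if intra_bits <= bit_budget:
        return f"intra frame, ~{intra_bits:,} bits (fits the budget)"
    overshoot = intra_bits / bit_budget
    return (f"intra frame, but the budget is only {bit_budget:,} bits: "
            f"quantize roughly {overshoot:.1f}x harder and expect blocking")

print(choose_frame_coding(residual_cost=50_000, bit_budget=60_000))      # talking head
print(choose_frame_coding(residual_cost=9_000_000, bit_budget=60_000))   # fast pan / scene cut
```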
MPEG-2 compression, as you read in my article, is the oldest of these schemes for compressing video. We are stuck with it as a legacy system, which ironically costs 10X more to license than the newer, better compression schemes! To give you an example, VC-1 (which my group at Microsoft developed) has both fade-to-black and flash detection. If someone takes a flash picture of the crowd at a fashion show, the lighting becomes much brighter for a fraction of a second. MPEG-2 thinks the entire frame has changed and retransmits it all, even though all that changed is the brightness of each pixel. The result is that you see a ton of compression artifacts during that moment, whereas with VC-1 the image holds its quality. Ditto for when the video fades to black: with MPEG-2 you see a lot of blocking artifacts, whereas VC-1 stays smooth and clean.
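The concept behind that detection can be sketched like this (an illustration of the idea only, not the actual VC-1 algorithm): before declaring a scene change, check whether a simple brightness gain and offset applied to the previous frame explains the new one. If it does, send those two numbers instead of a whole new frame:

```python
import numpy as np

def classify_frame_change(prev, cur, threshold=10.0):
    """Decide how to treat a big frame-to-frame difference.

    Illustrative only, not the actual VC-1 logic.  A camera flash or a fade
    changes every pixel, but only by a brightness gain/offset.  If scaling
    and shifting the previous frame explains the new one, the encoder can
    send just those two numbers instead of a whole (over-quantized) frame.
    """
    if np.abs(cur - prev).mean() < threshold:
        return "small change: normal inter frame"
    # Least-squares fit of cur ~= gain * prev + offset
    gain, offset = np.polyfit(prev.ravel(), cur.ravel(), 1)
    if np.abs(cur - (gain * prev + offset)).mean() < threshold:
        return f"fade/flash: send gain={gain:.2f}, offset={offset:.1f}, keep the old frame"
    return "real scene change: send a full intra frame"

rng = np.random.default_rng(0)
frame = rng.uniform(40, 180, size=(64, 64))    # some arbitrary picture
flash = frame * 1.2 + 20                       # same picture, briefly lit by a flash
cut = rng.uniform(40, 180, size=(64, 64))      # a completely different picture
print(classify_frame_change(frame, flash))
print(classify_frame_change(frame, cut))
```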
To catch up, MPEG-2 companies have put in all kinds of tricks, such as softening the video when the codec gets in trouble. Less detail means the video takes fewer bits to compress at equivalent quality, and to most people a softer picture is better than a sharper one with artifacts. Next time you are watching sports, look at what happens when the camera chases an (American) football or hockey player. Often you see the image get soft, but as soon as the motion calms down the picture becomes sharp again (note: this can also be caused by slow display response, as in older LCDs).
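Here is a rough illustration of why softening helps. The blur and the "detail cost" metric below are crude stand-ins I picked, not what any particular encoder actually does, but they show the effect: less detail, fewer bits needed:

```python
import numpy as np

def box_blur(frame, radius=1):
    """Soften a frame by averaging each pixel with its horizontal neighbours
    (a crude stand-in for the pre-filtering some encoders apply)."""
    out = frame.astype(float).copy()
    for shift in range(1, radius + 1):
        out += np.roll(frame, shift, axis=1) + np.roll(frame, -shift, axis=1)
    return out / (2 * radius + 1)

def detail_cost(frame):
    """Rough proxy for how many bits the frame needs: total pixel-to-pixel
    variation.  Less detail (a softer picture) means fewer bits at the
    same quantization."""
    return np.abs(np.diff(frame, axis=1)).sum() + np.abs(np.diff(frame, axis=0)).sum()

rng = np.random.default_rng(1)
busy_frame = rng.integers(0, 256, size=(64, 64)).astype(float)   # lots of fine detail
print(f"sharp frame cost   : {detail_cost(busy_frame):,.0f}")
print(f"softened frame cost: {detail_cost(box_blur(busy_frame, radius=2)):,.0f}")
```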
Going back to your point #5, the feeds that are sent to companies like Comcast are lightly compressed. Different schemes are used depending on how much money one wants to pay for bandwidth. In general, it is true that some quality is lost in that "uplink" and we have a double-compression situation where the video is compressed yet again when it is delivered to consumers.
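A toy way to see the double-compression penalty, with plain rounding standing in for a codec (the step sizes are arbitrary numbers I chose):

```python
import numpy as np

def quantize(x, step):
    """Lossy 'compression' stand-in: snap values to a grid of size `step`."""
    return np.round(x / step) * step

rng = np.random.default_rng(2)
pixels = rng.uniform(0, 255, size=100_000)

uplink = quantize(pixels, 12)       # the lightly compressed feed sent to the operator
rebroadcast = quantize(uplink, 17)  # compressed again for delivery to the home
print(f"error after one generation : {np.abs(uplink - pixels).mean():.2f}")
print(f"error after two generations: {np.abs(rebroadcast - pixels).mean():.2f}")
```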
I better stop here and see if I have lost all of you or just some.