Scottish Sensory Centre
University of Edinburgh
 

Adapting Video for VI Learners

Compression and Multimedia

Compression, digital storage, and multimedia

Video and audio use a lot of 'bandwidth'. That is, video takes up a lot of storage space, and broadcasting even a few TV channels takes a large share of the available airwaves. Hence the delay in developing such things as video-phones, and in offering video on more flexible storage media than magnetic tape.

The key to broadcasting more channels, to sending video over slower links, and to storing it on the likes of CDs, lies in converting the data to digital form so that the content can be compressed. Much digital information is treated this way - for instance, redundancy in text can be exploited to reduce disc storage needs. However, there are limits: you have to ensure that decompression reconstructs the data exactly (the compression must be 'lossless'). No-one wants half their bank records lost in decompression.
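As a rough, hypothetical illustration of what 'lossless' means, the Python sketch below uses simple run-length encoding - one of many possible techniques, and not one named in this article. Runs of repeated characters are stored as counts, and decompression rebuilds the original text exactly.

```python
def rle_compress(text):
    """Collapse runs of repeated characters into (character, count) pairs."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs


def rle_decompress(runs):
    """Reverse the process exactly: no information is lost."""
    return "".join(ch * count for ch, count in runs)


original = "aaaaabbbccccccdd"
assert rle_decompress(rle_compress(original)) == original  # lossless round trip
```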

The trick in dealing with video and graphical information is to recognise that you don't need to be 100% accurate when reconstituting the picture from its compressed form. The eye is forgiving, and an approximation will do. You can, for example, calculate which parts of a moving image have changed, and just store (or transmit) the major changes. Or you can lose some of the fine detail to save space. How much of an approximation depends on the material, and how fast it is changing, but the use of 'lossy' compression means that video clips can be saved on CD-ROM, and whole videos on the DVD format.
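The sketch below is again Python and purely illustrative - real codecs such as MPEG work on blocks of pixels, motion vectors and frequency transforms, not single values - but it shows the two ideas just mentioned in miniature: storing only the parts of a frame that changed, and discarding changes too small to matter. The threshold value is an assumption made for the example.

```python
THRESHOLD = 4  # assumed value: ignore pixel changes smaller than this


def frame_delta(previous, current, threshold=THRESHOLD):
    """Keep only the pixels that changed noticeably since the previous frame."""
    return {i: new
            for i, (old, new) in enumerate(zip(previous, current))
            if abs(new - old) >= threshold}


def reconstruct(previous, delta):
    """Rebuild an approximation of the current frame from the previous one."""
    frame = list(previous)
    for i, value in delta.items():
        frame[i] = value
    return frame


prev = [10, 10, 200, 200, 50]
curr = [11, 10, 120, 200, 50]      # one pixel changed a lot, one only slightly
delta = frame_delta(prev, curr)    # {2: 120} - the small change is thrown away
approx = reconstruct(prev, delta)  # close to curr, but not identical (lossy)
```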

There are many techniques for compressing audio, video and pictures, each aimed at particular tasks. What works well for animation doesn't work well for photographs. Moving images are tackled differently to static ones. However, the TV and multimedia industries are built on standards, not infinite diversity, and some are now emerging. The Moving Picture Experts Group (MPEG) has defined a number of standards for video compression which particularly suit TV and DVD programmes, while for multimedia applications Microsoft created AVI and Apple developed Quicktime. There are others.

Why should you care about the standards?

Fascinating though these breakthroughs are, they would be of merely academic interest to us if the compression standards specified only how a single stream of video was to be handled. However, they now go far beyond that, and have typically grown to include specifications about how video data are to be interleaved with other material, and how such combinations can be optimised for broadcast, stored-video, and Internet environments. As a result, these standards come to define what programme makers can include in the way of multiple streams of information, and how these can be made interactive. All of which makes the standards much more pertinent to those of us who want to create more accessible learning materials.

For instance, MPEG-2 has an extended standard dealing with complete programme delivery systems. This says that there must be video (compressed in certain ways). It also stipulates that there should be multiple audio channels: five full-bandwidth channels (left, right, centre, and two surround channels), plus an additional low-frequency enhancement channel. At the time of writing, the standards group were also considering up to seven user-selectable commentary/multilingual channels (for multiple languages or censored expletives, but the special needs use is obvious). MPEG-2 also has the ability to carry a stream of limited-colour still pictures - up to 32 at a time - which can be overlaid on the video (if the programme maker supplies them and the viewer selects them); these can be used as text tracks, for animation, or for titles.
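To make the 'multiple streams' idea concrete, here is a purely illustrative Python data structure - not the actual MPEG-2 programme or transport stream format - for a programme carrying one video track, several selectable audio tracks, and optional overlays. The stream names are invented for the example.

```python
from dataclasses import dataclass, field


@dataclass
class Programme:
    video: str                                        # the single compressed video stream
    audio_tracks: dict = field(default_factory=dict)  # label -> audio stream
    overlays: dict = field(default_factory=dict)      # label -> still/subtitle stream


prog = Programme(
    video="lesson-video",
    audio_tracks={"english": "lesson-audio-en",
                  "audio-description": "lesson-audio-ad"},
    overlays={"subtitles": "lesson-subtitles"},
)

# A player (or the learner) then chooses which of the extra channels to present:
chosen_audio = prog.audio_tracks.get("audio-description",
                                     prog.audio_tracks["english"])
```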

Apple's Quicktime is developing in the same way. Initially just a way of compressing video and audio for multimedia, it now allows for multiple video, audio, text, animation and picture tracks (with the text tracks searchable).

Historically, each addressed different primary markets, but with overlaps. MPEG deals with broadcast TV, digital videotape, DVD, and possibly Digital-VHS, whilst Quicktime and AVI started life with multimedia CD-ROM in mind. However, as the standards grow, so do the application areas: all three systems are used in Internet transmission of video, and each can be used on CD-ROM and DVD-ROM (i.e. DVD used as a large-capacity version of a CD-ROM, as opposed to a dedicated, industry-standard video disc). The process of adaptation continues: in its MPEG-4 phase of work the standards committee aims to support manipulation of the content of audio-visual data delivered over slow links, for application in areas as diverse as entertainment, distance learning, remote monitoring, and home shopping. Future versions of Quicktime will include support for similar interactive features.

Summary

To summarise: adding extra support for disabled learners means adding extra channels of information, breaking free of linear presentation metaphors to make space for supplementary information, and using interactive features to control all this. The compression standard you are forced to use sets your degrees of freedom to do this, particularly for broadcast TV, DVD and D-VHS, because the standard imposes limits on what extra channels there are and how you can present them. If you need more freedom than MPEG-2 allows, then Multimedia or Web approaches (or some combination of them) are your best delivery choice. Even here, though, AV compression standards will underpin and define the boundaries of your efforts.

 