PRONOM
Compression Types

Baseline JPEG  
Other names Discrete Cosine Transform, DCT, JPEG
Identifier(s) PUID:  x-cmp/11
Family  
Description The JPEG compression algorithm was developed in 1990 by the Joint Photographic Experts Group of ISO and CCITT, for the transmission of colour and greyscale images. It is a lossy technique which provides the best compression rates with complex 24-bit (True Colour) images. It achieves its effect by discarding image data which is imperceptible to the human eye, using a technique called the Discrete Cosine Transform (DCT). It then applies Huffman encoding to achieve further compression. The JPEG specification allows users to set the degree of compression, using an abstract Quality Setting. This provides a trade-off between compression rate and image quality. It is important to note that the Quality Setting is not an absolute value, as different JPEG encoders use different scales, and that even the maximum quality setting for baseline JPEG involves some loss. JPEG compression is most commonly used in the JPEG File Interchange Format (JFIF), SPIFF and TIFF. An illustrative sketch of the DCT and quantisation stages follows this record.
Lossiness Lossy
Released 01 Jan 1994
Withdrawn  
Developed by International Standards Organisation
Supported by
Documentation ISO/IEC 10918-1: 1994, Information technology - Digital compression and coding of continuous-tone still images: Requirements and guidelines
Rights  
Note  
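
The DCT and quantisation stages described above can be illustrated with a short Python sketch. It assumes only NumPy; the function names, the flat quantisation table and the quality scale factor are illustrative, not taken from the JPEG specification or any JPEG library, and the sketch omits zig-zag ordering, Huffman coding and chroma subsampling. It shows the lossy step: DCT coefficients are divided by a quantisation table and rounded, so that small high-frequency coefficients become zero and are discarded.

    import numpy as np

    def dct2(block):
        """Orthonormal 2-D DCT-II of an 8x8 block (the transform used by baseline JPEG)."""
        n = block.shape[0]
        k = np.arange(n)
        # DCT-II basis matrix with orthonormal scaling
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c @ block @ c.T

    def quantise(coeffs, table, quality_scale=1.0):
        """Divide DCT coefficients by a quantisation table and round.
        Rounding is the lossy step; larger tables or scales discard more detail."""
        return np.round(coeffs / (table * quality_scale)).astype(int)

    # Example flat 8x8 quantisation table (values are illustrative, not from the standard).
    Q = np.full((8, 8), 16)

    # A smooth 8x8 block of pixel values, level-shifted to centre on zero as JPEG does.
    block = np.tile(np.linspace(100, 140, 8), (8, 1)) - 128

    coeffs = dct2(block)
    print(quantise(coeffs, Q))          # most high-frequency coefficients quantise to zero
    print(quantise(coeffs, Q, 4.0))     # a coarser "quality setting" discards even more
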
Lempel-Ziv-Welch  
Other names LZW
Identifier(s) PUID:  x-cmp/12
Family  
Description The Lempel-Ziv-Welch compression algorithm was developed by Terry Welch in 1984 as a modification of the LZ78 compressor. It is a lossless technique which can be applied to almost any type of data, but is most commonly used for image compression. LZW compression is effective on images with colour depths from 1-bit (monochrome) to 24-bit (True Colour). LZW compression is encountered in a range of common graphics file formats, including TIFF and GIF. A sketch of the dictionary-building step follows this record.
Lossiness Lossless
Released 01 Jan 1984
Withdrawn  
Developed by T A Welch / [No organisation specified]
Supported by
Documentation Welch, T A, 1984, A technique for high performance data compression, IEEE Computer, 17: 6
Rights  
Note  
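
As an illustration of the dictionary-building idea described above, the following Python sketch encodes a byte string into a list of integer codes. It is a minimal sketch of the core algorithm only: the variants used in TIFF and GIF add features such as clear codes and variable code widths, and the function name is illustrative rather than taken from any library.

    def lzw_encode(data: bytes) -> list[int]:
        """Core LZW: emit a code for the longest dictionary match, then
        extend the dictionary with that match plus the next byte."""
        dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
        next_code = 256
        current = b""
        output = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in dictionary:
                current = candidate            # keep extending the match
            else:
                output.append(dictionary[current])
                dictionary[candidate] = next_code
                next_code += 1
                current = bytes([byte])
        if current:
            output.append(dictionary[current])
        return output

    # Repetitive input compresses well: these 9 input bytes encode to 5 codes.
    print(lzw_encode(b"ABABABABA"))
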
Run Length Encoding  
Other names RLE
Identifier(s) PUID:  x-cmp/13
Family  
Description Run length encoding (RLE) is perhaps the simplest image compression technique in common use. RLE algorithms are lossless, and work by searching for runs of bits, bytes, or pixels of the same value, and encoding the length and value of each run. As such, RLE achieves its best results with images containing large areas of contiguous colour, especially monochrome images. Complex colour images, such as photographs, do not compress well; in some cases, RLE can actually increase the file size. There are a number of RLE variants in common use, which are encountered in the TIFF, PCX and BMP graphics formats. A short sketch of the technique follows this record.
Lossiness Lossless
Released  
Withdrawn  
Developed by
Supported by
Documentation  
Rights  
Note  
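
Because the scheme is so simple, it can be shown in a few lines of Python. This is a generic byte-oriented sketch with illustrative function names; real RLE variants such as TIFF PackBits, PCX RLE and BMP RLE each define their own packet layouts. It demonstrates both the gain on uniform data and the expansion that can occur when the data contains no runs.

    def rle_encode(data: bytes) -> list[tuple[int, int]]:
        """Encode a byte string as (run length, byte value) pairs."""
        runs = []
        for byte in data:
            if runs and runs[-1][1] == byte:
                runs[-1] = (runs[-1][0] + 1, byte)   # extend the current run
            else:
                runs.append((1, byte))               # start a new run
        return runs

    def rle_decode(runs: list[tuple[int, int]]) -> bytes:
        return b"".join(bytes([value]) * length for length, value in runs)

    flat = b"\x00" * 100 + b"\xff" * 28          # large uniform areas: compresses well
    noisy = bytes(range(64))                      # no repeated values: RLE expands it
    print(rle_encode(flat))                       # [(100, 0), (28, 255)]
    print(len(rle_encode(noisy)), "runs for", len(noisy), "bytes")
    assert rle_decode(rle_encode(flat)) == flat
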
CCITT T.4  
Other names CCITT Group 3
Identifier(s) PUID:  x-cmp/14
Family  
Description Officially known as CCITT T.4, Group 3 is a compression algorithm developed by the International Telegraph and Telephone Consultative Committee in 1985 for encoding and compressing 1-bit (monochrome) image data. Its primary use has been in fax transmission, and it is optimised for scanned printed or handwritten documents. Group 3 is a lossless algorithm, of which two forms exist: one-dimensional (a modified form of Huffman encoding applied to run lengths) and two-dimensional, which offers superior compression rates. Due to its origin as a data transmission protocol, Group 3 encoding incorporates error detection codes. Group 3 compression is most commonly used in the TIFF file format. A sketch of the run-extraction step used by the one-dimensional form follows this record.
Lossiness Lossless
Released 01 Jan 1985
Withdrawn  
Developed by International Telecommunication Union
Supported by
Documentation CCITT Blue Book, 1989, Volume VII, Fascicle VII.3: Terminal equipment and protocols for telematic services, recommendations T.0 - T.63
Rights  
Note  
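
The one-dimensional form codes each scanline as alternating white and black run lengths, which are then replaced by code words from fixed tables. The Python sketch below illustrates only the run-extraction step for a single 1-bit scanline; the actual T.4 terminating and make-up code tables, EOL codes and error-detection features are not reproduced, and the function name is illustrative.

    def t4_runs(scanline: list[int]) -> list[int]:
        """Split a 1-bit scanline (0 = white, 1 = black) into the alternating
        run lengths that T.4 one-dimensional coding would then Huffman-code.
        Lines are assumed to start with a white run, as in T.4 (a leading
        black pixel therefore yields an initial run length of zero)."""
        runs = []
        expected = 0                      # colour the next run must have
        i = 0
        while i < len(scanline):
            length = 0
            while i < len(scanline) and scanline[i] == expected:
                length += 1
                i += 1
            runs.append(length)
            expected ^= 1                 # alternate white/black
        return runs

    # A scanned text line is mostly long white runs broken by short black runs,
    # which is the pattern the T.4 code tables are optimised for.
    line = [0] * 40 + [1] * 3 + [0] * 12 + [1] * 2 + [0] * 7
    print(t4_runs(line))                  # [40, 3, 12, 2, 7]
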
CCITT T.6  
Other names CCITT Group 4
Identifier(s) PUID:  x-cmp/15
Family  
Description Officially known as CCITT T.6, Group 4 is a compression algorithm developed by the International Telegraph and Telephone Consultative Committee as a development of the two-dimensional Group 3 standard for encoding and compressing 1-bit (monochrome) image data. It is faster, and offers compression rates which are typically double those of Group 3. Like Group 3, it is lossless and designed for 1-bit images. However, being designed as a storage rather than transmission format, it does not incorporate the error detection and correction functions of Group 3 compression. Group 4 compression is most commonly used in the TIFF file format.
Lossiness Lossless
Released 01 Jan 1989
Withdrawn  
Developed by International Telecommunication Union
Supported by
Documentation CCITT Blue Book, 1989, Volume VII, Fascicle VII.3: Terminal equipment and protocols for telematic services, recommendations T.0 - T.63
Rights  
Note  