Measuring Information
David Chapman, The Open University, UK
In his celebrated 1948 paper 'A Mathematical Theory of Communication', Claude Shannon, building on the work of Ralph Hartley and Harry Nyquist, used a measure of information based on the entropy function, $H = -\sum_i p_i \log p_i$, that has become the flagship of information theories. In the words of David MacKay, Shannon's 1948 paper "both created the field of information theory and solved most of its fundamental problems". It has been phenomenally successful, and as of February 2014 had been cited by more than 10,000 documents in the Web of Science database. Though it addressed the specific task of modelling electrical communication systems, the breadth of fields among those 10,000 citations – including such unexpected topics as Pediatrics, Fisheries, Public Administration, Women's Studies, Art and Religion – reveals that it has found application far beyond its origins.
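As an illustrative aside (not from Shannon's paper itself), the entropy of a discrete distribution can be computed directly from the formula above; the following Python sketch assumes probabilities summing to 1 and uses base-2 logarithms, so that the result is in bits.

    import math

    def shannon_entropy(probs, base=2):
        """Shannon entropy H = -sum(p_i * log(p_i)) of a discrete distribution.

        With base 2 the result is in bits. Terms with p_i == 0 contribute
        nothing, by the usual convention that 0 * log 0 = 0.
        """
        return -sum(p * math.log(p, base) for p in probs if p > 0)

    # A fair coin carries 1 bit per toss; a heavily biased coin carries less.
    print(shannon_entropy([0.5, 0.5]))   # 1.0
    print(shannon_entropy([0.9, 0.1]))   # ~0.469

The second result illustrates why entropy is a measure of information: the outcome of a biased coin is more predictable, so learning it resolves less uncertainty.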
The application of Shannon's model to diverse fields such as these started almost as soon as the paper appeared, but the legitimacy of doing so has been hotly debated from the start. Arguments rage around issues such as the interpretation of the word 'information' in Shannon's model, and the applicability of the model (sometimes referred to – often disparagingly – as the conduit model of communication) to communication outside of engineering.
The basics of the Shannon model will be presented, some of the context and issues surrounding what it says about information will be discussed, and the insights it can offer will be illustrated by the example of modelling the information content of school reports.
Shannon's entropy function is the flagship, but it is not the only measure of information that has been proposed.
Whereas Shannon's measure of information comes from a communications engineering perspective, algorithmic information is a different measure, which comes from computer science. Pioneered by Andrey Kolmogorov, Ray Solomonoff and Gregory Chaitin, the algorithmic information content (Kolmogorov complexity) of an object is the length of the shortest description that can specify it. Something which to human perception is apparently complex might be specifiable by a compact algorithm.
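Kolmogorov complexity itself is uncomputable, but the idea can be made concrete with a rough proxy, used here purely for illustration and not part of the Kolmogorov, Solomonoff and Chaitin formalism: the compressed length of a string is an upper bound on its description length. In the Python sketch below, a long but highly regular string compresses to a short description, while an irregular string of the same length does not.

    import random
    import zlib

    # Compressed length as a crude upper-bound proxy for description length.
    patterned = ("01" * 500).encode()    # 1000 bytes, but highly regular
    random.seed(0)
    irregular = bytes(random.randrange(256) for _ in range(1000))

    print(len(zlib.compress(patterned)))   # small: a short description suffices
    print(len(zlib.compress(irregular)))   # near 1000: little structure to exploit

The patterned string looks as long as the irregular one, yet it is fully specified by the compact rule "repeat '01' 500 times", which is the sense in which apparent complexity can hide low algorithmic information content.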
The basics of algorithmic information will be presented, the insights it can offer into the nature of information will be explored, and algorithmic information will be compared with Shannon information.