If you ever took an English course, you learned that “redundancy” is a bad thing. It means useless, wasted repetition. If you live in the UK you dread being made “redundant” because it means your boss has no use for you and has laid you off.
Only in engineering can it have a positive connotation – for example, a redundant duplicate backup system can be a good idea for safety and reliability. It’s repetition but it’s not useless or wasted.
In fact, redundancy is widely used in practice, and duplication/backup is only the simplest form. I discovered this trying to make sense of a multi-topic introductory computer science course.
A while back my UVic colleague Mary Sanseverino and I were both teaching this course and we were looking for unifying themes. Some were obvious – levels of abstraction, modularity, iteration vs recursion. One surprising theme that popped up was redundancy.
For example, redundancy played a big role in the design and operation of ENIAC, the first modern computer. The ENIAC had 18,000 vacuum tubes and the conventional wisdom was that a device this large would fail too regularly to be useful. The design of the circuits was redundant and the tubes were operated (in terms of voltage etc) well below their official ratings. Testing and preventive maintenance also reduced failures. So did keeping the machine running continuously – most tube failures occurred during power up/down.
The most striking example was the power supply. The mains power drove an electric motor that powered a generator! (This smoothed out fluctuations.)
What notion of redundancy is this? A general one, that Mary S. and I came up with. Namely
devoting more than the bare minimum of resources to achieve a better result
For example, ENIAC worked connected directly to the mains but was more reliable with the redundant motor/generator pair.
We didn’t have to look much further to find other examples of this kind of redundancy. ENIAC’s digital logic included AND, OR and NOT gates, even though either of the first two can be computed using the remaining two. The instruction set was similarly redundant, with, for example, an add operation, a subtract operation and a negation operation.
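The gate redundancy is easy to demonstrate. Here is a quick sketch in Python (standing in for the hardware) that builds OR out of AND and NOT via De Morgan’s law:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

# De Morgan's law: a OR b == NOT(AND(NOT(a), NOT(b)))
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

# Verify against Python's built-in bitwise OR on all four inputs
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
```

The dedicated OR gate is redundant in exactly our sense: the bare minimum would be AND and NOT, but the extra gate gives simpler, faster circuits.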
Successors of ENIAC copied its redundancy and introduced even more with assembly language. A second programming language (after machine language) is already redundant, since machine language is in principle enough. Symbolic names are redundant, as are symbolic addresses. So is the need to declare all symbolic addresses used.
Then came the high level languages – many of them, a clear case of redundancy. High level languages themselves are highly redundant, the general rule being the higher the level, the more redundant.
For example, they typically have for, while and even until constructs even though while is enough. Variables and their types must be declared. The same goes for the number and type of arguments of procedures/methods. Constructs are closed by keywords (such as endif or endcase) even though in principle a generic end would be enough. Typically every case construct must have a default branch even if the other cases cover all the situations which will arise.
One especially clear example is the assert statement of, say, Python. An assert statement checks that something that should be true, is, and therefore normally contributes nothing. In general many forms of redundancy involve making sure or at least checking that something that shouldn’t happen doesn’t happen.
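A minimal sketch of that “checking that the impossible doesn’t happen” role (the function and its name are just for illustration):

```python
def average(xs):
    # The caller should never pass an empty list; the assert makes
    # that unstated assumption explicit and checkable. In normal
    # operation it contributes nothing -- pure redundancy.
    assert len(xs) > 0, "average of an empty list"
    return sum(xs) / len(xs)

print(average([2, 4, 6]))  # 4.0
```

When the assumption holds, the assert is dead weight; when it fails, the redundant check turns a silent wrong answer into an immediate, located error.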
The original ENIAC was a general purpose programmable computer even though it was designed for computing ballistic tables, for which a simpler design would have sufficed. The extra power/generality was redundant. (Ironically, they eventually added redundant special-purpose stored programs for ballistics).
In what sense does this redundant generality give a “better result”? In the sense that the device (or whatever) can be used for purposes not anticipated. (Alan Kay once said that this property is a hallmark of good design.) Personal computers continued this tradition – they were general purpose even though they were designed for playing games and storing recipes. The redundancy really paid off when the web was invented. Nobody anticipated the web but everything was in place to implement it.
It should be obvious that both software and hardware (I haven’t even mentioned caching) embody multiple layers of redundancy. Our systems would be useless without them.
What about real life? Can we use redundancy in real life? We can and do (think: copilots) but not as much as we could, because redundancy requires extra resources. As a general rule, in today’s society, resources are chronically scarce.
For example, I’ve often thought that courses would be better if given by a pair of instructors. I don’t mean “team teaching” where two instructors take turns giving lectures. I mean two instructors in the class at the same time.
A dumb way of using the second instructor is to have him/her sitting in the corner waiting to step in if the first falls ill. We can do much better than that!
The second instructor could operate the PowerPoint, wipe the blackboard, or (more challenging) circulate in the classroom answering student questions. The two could hold dialogues, question and answer sessions, even argue. Each could watch and correct mistakes made by the other and intervene if they think the class is not following. In dealing with important points the second instructor could give a second, different explanation (redundant, of course).
Naturally the two would frequently swap roles (not strictly necessary and therefore also redundant).
A dream, of course, because colleges are (as usual) forced to teach using the bare minimum of resources.
I may be a dreamer, but I’m not the only one. As Voltaire famously said
Le superflu, chose très nécessaire
Here it is in English
The superfluous, a very necessary thing
though of course this is redundant.