Teletubbies were the exact opposite of humans: they communicated in images, while their speech was infantile babbling. With us it is reversed: our visual cues are as primitive as their babbling, while speech serves as our main means of communication. Why are we not Teletubbies? Wouldn't flashing images be a superior way of communicating?
Judging by our art, this is not the case. The least expressive artistic medium is 3D sculpture, despite being the most realistic. Art galleries and museums are full of people admiring 2D objects, but this is not what they do every hour of every day, and such images can only be understood as part of a broader narrative telling how such objects need to be perceived. The most expressive means are 1D narratives. One can point to movies and TV as examples of 2D images progressing in time, but these are illustrated stories that began as scripts.
There is another telling example: life. Our traits are written in 1D code. This code is translated into elaborate 3D objects, much as a script is translated into a theatrical production. In this 3D production, part of the show is writing a 1D script telling how the next show is to be staged. There is no a priori reason for the genetic carrier to be 1D. One can argue that 3D scripts would be physically difficult to handle in 3D space, but a 2D matrix would be readily accessible at any point.
Even our own 2D storage devices (hard drives, CDs, barcodes) are only nominally 2D; they retrieve data as 1D bit streams. The reason is the necessity of error correction. There is little to the art of such correction if you are willing to proofread and keep multiple copies of the data, but that sacrifices speed of communication/transmission and storage space. If those are of little concern, you get a Jewish scribe slowly and faithfully reproducing the words of the Bible on parchment. The problem emerges when speed and space ARE the concern. The art begins when one needs to minimize the space dedicated to check bits while maximizing the transmission rate without sacrificing fidelity.
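The scribe's strategy can be sketched in a few lines: it is just a repetition code, where fidelity comes from keeping full copies and taking a majority vote. A minimal sketch (Python; the names and data are mine, not from any real storage system):

```python
# The "scribe" strategy as a repetition code: keep several full copies
# of the data and recover each symbol by majority vote. Reliable, but
# three copies cost 200% overhead in storage space.

def majority_vote(copies):
    """Recover each symbol by majority vote across full copies."""
    return [max(set(column), key=column.count) for column in zip(*copies)]

message = list("in the beginning")
copies = [message[:], message[:], message[:]]
copies[0][3] = "X"   # a random error creeps into copy 0
copies[2][8] = "#"   # another random error in copy 2

recovered = majority_vote(copies)
assert "".join(recovered) == "in the beginning"
```

Three copies correct any error that hits only one copy at a given position, but at a space cost that the check-bit approach is precisely designed to avoid.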
An average CD corrects random errors of up to 2 bytes per 32-byte block; more importantly, it corrects burst errors of up to about 4,000 bits in length; such clustered errors are caused by scratches on the disc. Because such errors predominate, there is a strong incentive to interleave the data, so that burst errors appear as random errors spread over many blocks (which can then be corrected individually). A lot of buffering and processing memory goes into this error-correction operation. 2D data (say, in barcodes) are transformed into 1D bit streams and then error-corrected using Reed-Solomon coding.
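The interleaving trick itself is easy to sketch: write the stream into a matrix row by row, transmit it column by column, and a scratch (a burst of consecutive errors) lands as one error per block. A toy sketch, with arbitrary depth and data rather than actual CD parameters:

```python
# Block interleaver: rows of the matrix are the blocks; transmission
# order is column by column, so consecutive transmitted symbols belong
# to different blocks.

def interleave(data, depth):
    width = len(data) // depth
    return "".join(data[r * width + c] for c in range(width) for r in range(depth))

def deinterleave(data, depth):
    width = len(data) // depth
    out = [None] * len(data)
    k = 0
    for c in range(width):
        for r in range(depth):
            out[r * width + c] = data[k]
            k += 1
    return "".join(out)

data = "ABCDEFGHIJKLMNOP"            # four 4-symbol blocks
stream = interleave(data, 4)
corrupted = stream[:4] + "????" + stream[8:]   # a "scratch": 4-symbol burst
back = deinterleave(corrupted, 4)

blocks = [back[i:i + 4] for i in range(0, 16, 4)]
# the burst is now spread out: exactly one error per block
assert all(block.count("?") == 1 for block in blocks)
```

Each block now contains a single error, which is within the per-block correcting capacity of the code; without interleaving, the same burst would have wiped out an entire block beyond repair.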
The message symbols become coefficients of a polynomial; that polynomial is multiplied by a specially constructed generator polynomial, and the Galois-field properties of such polynomials are exploited in a clever way, so that roughly 6-10% extra bits suffice to detect and correct random errors. All of our electronic devices rest on this general approach, invented in the 1960s. 2D data must be processed as 1D streams, because the error-correcting methods are inherently 1D; it does not matter how the data are stored or retrieved. This is what you have to do if you are seriously concerned about the accuracy and speed of communication.
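To make the polynomial idea concrete, here is a toy erasure-correcting code in the Reed-Solomon spirit. It works over the prime field GF(257) rather than the GF(2^8) and generator-polynomial machinery real devices use, and its check-bit fraction is nowhere near the 6-10% of production codes; it only illustrates the core property: message symbols become polynomial coefficients, the codeword is the polynomial's values at n points, and ANY k surviving values recover the message exactly.

```python
# Toy Reed-Solomon-style erasure code over the prime field GF(257).
# k message symbols -> coefficients of a degree-(k-1) polynomial;
# codeword = its values at x = 0..n-1; any k survivors suffice.

P = 257  # a prime just above the byte range (a convenience, not the real CD field)

def rs_encode(msg, n):
    # evaluate the message polynomial at x = 0..n-1
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in range(n)]

def rs_decode_erasures(points, k):
    # points: surviving (x, y) pairs; Lagrange interpolation needs k of them
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for j in range(k):
        basis = [1]          # build the Lagrange basis polynomial L_j(x)
        denom = 1
        for m in range(k):
            if m == j:
                continue
            denom = denom * (xs[j] - xs[m]) % P
            new = [0] * (len(basis) + 1)   # multiply basis by (x - xs[m])
            for d, b in enumerate(basis):
                new[d + 1] = (new[d + 1] + b) % P
                new[d] = (new[d] - b * xs[m]) % P
            basis = new
        scale = ys[j] * pow(denom, P - 2, P) % P   # division via Fermat inverse
        for d, b in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * b) % P
    return coeffs

msg = [72, 105, 33]            # "Hi!"
code = rs_encode(msg, 5)       # 3 message symbols + 2 check symbols
survivors = [(x, y) for x, y in enumerate(code) if x not in (1, 4)]  # 2 erasures
assert rs_decode_erasures(survivors, 3) == msg
```

The point of the construction is that the check symbols are not copies of anything in particular: any two losses, wherever they fall, cost exactly the two extra symbols and nothing more.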
Replication of DNA does not work like that at all, although it shares many of the same concerns. The approach is not clever error-correction methods applied to a chunk of data; it is the Jewish scribe's approach of careful proofreading and mismatch repair, based on the complementarity of the two strands of the double helix (only done rapidly). If there is a double-strand break, the damage cannot be repaired unless there are multiple copies, and maintaining such copies is an expensive thing to do. Only desperados living in harsh environments go to such extremes.
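The complementarity-based check can be caricatured in code (a schematic sketch, not real repair biochemistry): every position of a new strand can be verified locally against its template, with no check bits anywhere.

```python
# Complementarity as a local, check-bit-free proofreading rule:
# a position is suspect exactly when the two strands fail to pair.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def mismatches(template, copy):
    """Positions where the two strands fail Watson-Crick pairing."""
    return [i for i, (a, b) in enumerate(zip(template, copy))
            if PAIR.get(a) != b]

template = "ATGCGTA"
copy     = "TACGGAT"   # the complement of template, with one error at position 4
assert mismatches(template, copy) == [4]
```

Detection is position-by-position and immediate, which is why the scribe strategy, hopeless for a transmission channel, works well when the "channel" is a polymerase crawling along its own template.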
Cellular machinery is designed to prevent such irreparable damage rather than to repair it after it has occurred. There is little redundancy and there are no check bits. A bacterium cannot spare even 10% of its DNA for error correction: that would squander vital resources that could be used for (almost accurate) replication. A certain fraction of mutations is simply tolerated. Nor can the bacterium implement anything like Reed-Solomon coding to bring the check-bit overhead down to such a comfortably low percentage, so the extra space dedicated to no productive use would be much larger still. It is not worth it. The life of a single bacterium is worthless anyway.
We are not bacteria; we can afford to be wasteful, but our code is inherited from creatures that could not afford such luxuries. Nature's way of dealing with burst errors is not to deal with them at all: the organism is not viable, and such errors do not propagate. The results of the two approaches are comparable: the fidelity of DNA replication and of digital transmission is about the same, roughly one error per 1e9 symbols. There are errors in DNA replication, but they come in well-defined categories: nucleotide substitutions, insertions/deletions, frame shifts and slippages, duplications. Only so many types of error can occur in a 1D system with built-in complementarity, while many more possibilities exist in a 2D system (faults, dislocations, etc.). My feeling is that 2D coding would be impossible without buffered memory and block error-correcting codes, which are, in turn, impossible to evolve in a step-by-step fashion, while 1D replication of the observed type can be.
There is another, even more important concern: viral invaders. If you only have 1D storage, they can incorporate themselves in only a few ways (they cannot disperse themselves extensively) and so can potentially be recognized and dealt with. Just think in how many ways a 1D sequence (without even cutting itself into smaller pieces) could incorporate itself into a 2D matrix! The only way to intercept such invaders would be before they insert; afterwards, their detection would require block error correction.
Even in 1D this is a major problem. One way of dealing with it (observed in the Tetrahymena protozoa) is, once again, to keep two copies of the genome. An RNA copy migrates from one nucleus to the other; if it finds a perfect match, it self-destructs. If it does not, alien DNA is present, and the RNA copy survives; during replication it destroys that DNA through an RNAi-like mechanism.
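Stripped of all biochemistry, the logic of this scan is a simple membership test: a cartoon with hypothetical names, not the actual Tetrahymena machinery.

```python
# Cartoon of the scan-RNA logic: a copy that finds a perfect match in the
# reference genome self-destructs; a copy that does not is, by elimination,
# a marker of alien DNA and survives to target it.

def scan(rna_copies, reference_genome):
    """Return the copies that survive scanning (i.e., flag alien sequences)."""
    return [rna for rna in rna_copies if rna not in reference_genome]

reference = "ATGGCATTACGG"
copies = ["GCATTA", "TTTTCC"]        # one native fragment, one alien
assert scan(copies, reference) == ["TTTTCC"]
```

Note that the whole scheme only works because a 1D insertion leaves a contiguous, recognizable signature; in a 2D matrix, as argued above, there would be no such simple test.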
However, such tricks are generally not worth pursuing, and eukaryotic genomes are overrun with garbage: it is easier to copy all this stuff (and find uses for it) than to go through the Herculean ordeal of excising viral DNA. That is to say, at the molecular level, Teletubbies are VERY unlikely.
I can generalize and claim that Teletubbies are unlikely in any situation where high-fidelity communication is required and errors and interference are likely to occur.
This explains why we are not Teletubbies in our general design, but this rationale does not quite explain why we do not communicate like Teletubbies. Indeed, our language has none of the error-correction features that are the mark of high-fidelity systems. It is simply not designed for reliable transmission of information. In fact, it seems to be designed for the least reliable transmission of information.
In the next post I will argue that we are not Teletubbies specifically for this reason.