Dyson, George (2012). Turing’s Cathedral: The Origins of the Digital Universe. New York: Pantheon. ISBN 9780307907066. 338 pages. €10.99.
Anyone who knows me or follows me knows by now that I don’t get enthusiastic about a book too easily. Yet this long essay by George Dyson deserves acclaim. It has recently been translated by Codice edizioni (La cattedrale di Turing. Le origini dell’universo digitale), so if you don’t want to make the effort of reading it in English, you can rush out and read it in Italian: I recommend it in either language.
It took me a long time to read the book (mostly on subway rides), and so I had time to anticipate some impressions and reflections, here (about predictions) and here (on Ape and Essence by Aldous Huxley).
I discover only now, incidentally, that Dyson presented this book at the Festival della scienza in Genoa just a few days ago, on October 28, 2012. I had actually met him a few years earlier in Rome, at the Festival delle scienze, where he discussed the same book (then still in gestation) with the same Italian interlocutor, Vittorio Bo (Rome, Auditorium Parco della musica, “Tra Possibile e Immaginario” Festival delle Scienze 2010, Thursday, January 14, 2010, Sala Petrassi, 9 p.m.: La Cattedrale di Turing e l’universo digitale. A conversation with George Dyson, John Brockman, and Vittorio Bo).
In the post I devoted to him at the time, I recounted his beautiful metaphor about kayaks and canoes, his answer to John Brockman’s annual question, which in 2009 was “How has the Internet changed your life?” If you didn’t read it then, go read it now, because it is beautiful, true, and very profound. In fact, here it is again, so you’ll have no excuses:
KAYAKS vs CANOES
In the North Pacific ocean, there were two approaches to boatbuilding. The Aleuts (and their kayak-building relatives) lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beach-combed wood. The Tlingit (and their dugout canoe-building relatives) built their vessels by selecting entire trees out of the rainforest and removing wood until there was nothing left but a canoe.
The Aleut and the Tlingit achieved similar results — maximum boat / minimum material — by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.
I was a hardened kayak builder, trained to collect every available stick. I resent having to learn the new skills. But those who don’t will be left paddling logs, not canoes.
What I didn’t know at the time is that George Dyson — grandson, son, and brother of illustrious relatives (more on this later) — had moved at 16 to British Columbia, on Burrard Inlet north of Vancouver, living for a long time in a house he built with his own hands from salvaged materials, 30 meters up a tree, and founding a workshop for baidarka-style kayaks.
In Turing’s Cathedral, Dyson meticulously reconstructs the birth of the first digital computer at Princeton in the immediate postwar years. The book is extremely rich in firsthand testimony and information. Here George Dyson is helped by the circumstance of being the son of the physicist Freeman Dyson and his first wife, the mathematician Verena Huber-Dyson (as well as the younger brother of Esther Dyson and the grandson of the English composer Sir George Dyson), and of having spent his childhood and adolescence at the Institute for Advanced Study in Princeton. [If I may tell a very small personal anecdote: I had the good fortune to spend a day at the IAS visiting someone who was there for a research stay, to have lunch in its legendary cafeteria, and to meet the very frail Freeman Dyson, then 87 years old.]
But the book is not just an accurate historical reconstruction. The individual chapters are built around the many interesting people, more or less well known, who collaborated on the project: towering above them all is John von Neumann, but Alan Turing and Kurt Gödel are first-rate supporting actors, not to mention walk-ons of the stature of Oswald Veblen or Stanislaw Ulam. Moreover, each of the 18 chapters is centered not only on one or more of the scientists who contributed to the project, but also on the contributions they made to one or more of its scientific advances. Finally, especially in the closing pages of the essay, the idea dear to George Dyson emerges clearly: that the digital universe follows an evolutionary trajectory of its own. In this sense Janet Maslin was right when, in the review published in the San Jose Mercury News of June 10, 2012 (I won’t link it because it sits behind an odious paywall), she called Turing’s Cathedral «a creation myth of the digital universe.»
I won’t say anything more: you must read it, if you are interested in the subjects it covers, and also if you are interested in a rather original historiographical style.
Besides the usual quotations, which are understandably numerous and which I will list at the end, I would like to include the video of a George Dyson talk at TED and links to some reviews that appeared in the international press.
Here is the link to William Poundstone’s review, published in the New York Times of May 4, 2012 (Unleashing the Power. ‘Turing’s Cathedral,’ by George Dyson), and the one to Evgeny Morozov’s review, published in The Observer of March 25, 2012 (Turing’s Cathedral by George Dyson – review).
* * *
Here are my many annotations, keyed to Kindle locations (they are personal notes, which you are invited but of course not obliged to read).
The term bit (the contraction, by 40 bits, of “binary digit”) was coined by statistician John W. Tukey shortly after he joined von Neumann’s project in November of 1945. 
The new machine was christened MANIAC (Mathematical and Numerical Integrator and Computer) and put to its first test, during the summer of 1951, with a thermonuclear calculation that ran for sixty days nonstop. 
What could be wiser than to give people who can think the leisure in which to do it? — Walter W. Stewart to Abraham Flexner, 1939 
Equations for gravitation, relativity, quantum theory, five perfect solids, and three conic sections were set into leaded glass windows, and the central mantelpiece featured a carving of a fly traversing the one-sided surface of a Möbius strip. 
Benoît Mandelbrot, who arrived at von Neumann’s invitation in the fall of 1953 to begin a study of word frequency distributions (sampling the occurrence of probably, sex, and Africa) that would lead to the field known as fractals […] 
We are Martians who have come to Earth to change everything—and we are afraid we will not be so well received. So we try to keep it a secret, try to appear as Americans … but that we could not do, because of our accent. So we settled in a country nobody ever has heard about and now we are claiming to be Hungarians. — Edward Teller, 1999 
The good news is that, as Leibniz suggested, we appear to live in the best of all possible worlds, where the computable functions make life predictable enough to be survivable, while the noncomputable functions make life (and mathematical truth) unpredictable enough to remain interesting, no matter how far computers continue to advance. 
“It was his genius at synthesizing and analyzing things. He could take large units, rings of operators, measures, continuous geometry, direct integrals, and express the unit in terms of infinitesimal little bits. And he could take infinitesimal little bits and put together large units with arbitrarily prescribed properties. That’s what Johnny could do, and what no one else could do as well.” 
Vladimir Kosma Zworykin was a pioneer of television (and the last entry in many encyclopedias) […] 
Mathematicians produce their best work at about the same time that they produce their children, and the nursery school helped keep the two apart. 
“I can see no essential difference between the materialism which includes soul as a complicated type of material particle and a spiritualism which includes material particles as a primitive type of soul,” Wiener added in 1934. 
Leibniz saw binary coding as the key to a universal language and credited its invention to the Chinese, seeing in the hexagrams of the I Ching the remnants of “a Binary Arithmetic … which I have rediscovered some thousands of years later.” 
“There it might be said that the complete description of its behavior is infinite because, in view of the non existence of a decision procedure predicting its behavior, the complete description could be given only by an enumeration of all instances. The universal Turing machine, where the ratio of the two complexities is infinity, might then be considered to be a limiting case.” 
Brownian motion — the random trajectory followed by a microscopic particle in response to background thermodynamic noise. 
Maxim 7 advised “Never estimate what may be accurately computed”; Maxim 8 advised “Never guess what may be estimated”; and, if a guess was absolutely necessary, “Never guess blindly” was Maxim 9. 
“We should clear any fog surrounding the notion of ‘prediction,’ ” Bigelow confessed. “Strictly and absolutely, no network operator—or human operator—can predict the future of a function of time.… So-called ‘leads’ evaluated by networks or any other means are actually ‘lags’ (functions of the known past) artificially reversed and added to the present value of the function.” 
“A binary counter is simply a pair of bistable cells communicating by gates having the connectivity of a Möbius strip.” 
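Bigelow’s image can be made concrete in a few lines. This is a minimal sketch of my own (not from the book): two one-bit cells in a master–slave arrangement, where the slave’s inverted output is fed back to the master — the “Möbius strip” twist that makes the pair toggle, i.e. count in binary, dividing the clock by two.

```python
class BistableCell:
    """A one-bit storage element: latches its input when gated."""
    def __init__(self):
        self.state = 0

    def latch(self, value):
        self.state = value

def clock_pulse(master, slave):
    # Leading edge: the master captures the slave's *inverted* output
    # (the Möbius feedback). Trailing edge: the slave copies the master.
    master.latch(1 - slave.state)
    slave.latch(master.state)
    return slave.state

master, slave = BistableCell(), BistableCell()
outputs = [clock_pulse(master, slave) for _ in range(6)]
print(outputs)  # → [1, 0, 1, 0, 1, 0]: the pair toggles once per clock pulse
```

Without the inversion in the feedback path, the pair would simply hold its state forever; the single “twist” is what turns two memory cells into a counter.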
[…] 1951 “Reliable Organizations of Unreliable Elements” and 1952 “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components,” […] 
Some failures stem from lack of vision, and some failures from too much. 
“Consideration was given to the theory that the carbon dioxide content of the atmosphere has been increasing since the beginning of the industrial revolution, and that this increase has resulted in a warming of the atmosphere since that time,” the proceedings report. “Von Neumann questioned the validity of this theory, stating that there is reason to believe that most of the industrial carbon dioxide introduced into the atmosphere must already have been absorbed by the ocean.” The debate was on. 
Imagine a future, combining the visions of Lewis Fry Richardson with those of von Neumann, where the Earth (including much of its oceans) is covered by wind turbines immersed in the momentum flux of the atmosphere, and photovoltaics immersed in the radiation flux from the sun. Eventually enough of these energy-absorbing and energy-dissipating surfaces will be connected to the integrated global computing and power grid, to form, in effect, the great Laplacian lattice of which Charney and Richardson dreamed. Every cell in this system would account for its relations with its neighbors, keeping track of whether it was dark, or sunny, or windy, or calm, and how those conditions may be expected to change. Coupled directly to the real, physical energy flux would be a computational network that was no longer a model—or rather, was a model, in Charney and Richardson’s sense of the atmosphere constituting a model of itself. 
Monte Carlo opened a new domain in mathematical physics: distinct from classical physics, which considers the precise behavior of a small number of idealized objects, or statistical mechanics, which considers the collective behavior, on average, of a very large number of objects, Monte Carlo considers the individual, probabilistic behavior of an arbitrarily large number of individual objects, and is thus closer than either of the other two methods to the way the physical universe actually works. 
Biological evolution is, in essence, a Monte Carlo search of the fitness landscape, and whatever the next stage in the evolution of evolution turns out to be, computer-assisted Monte Carlo will get there first.
Monte Carlo is able to discover practical solutions to otherwise intractable problems because the most efficient search of an unmapped territory takes the form of a random walk. […] The genius of Monte Carlo—and its search-engine descendants—lies in the ability to extract meaningful solutions, in the face of overwhelming information, by recognizing that meaning resides less in the data at the end points and more in the intervening paths. [4563-4568]
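The Monte Carlo idea in these two passages — follow many individual random samples and let their statistics converge on an answer — can be illustrated with the standard textbook example (my own choice, not an example from the book): estimating π by scattering random points in the unit square.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling individual random points, Monte Carlo style."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()  # one random "object" in the unit square
        if x * x + y * y <= 1.0:           # did it land inside the quarter circle?
            inside += 1
    return 4.0 * inside / n_samples        # quarter-circle area ratio, times 4

print(estimate_pi(100_000))  # close to 3.14159, with statistical noise
```

No point individually “knows” anything about π; the answer emerges from the probabilistic behavior of an arbitrarily large number of individual samples, exactly the regime the quotation describes.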
“He was simultaneously one of the smartest people that I’ve ever met and one of the laziest—an interesting combination.” 
Ulam’s self-reproducing cellular automata—patterns of information persisting across time—evolve by letting order in but not letting order out. 
“GOD DOES NOT play dice with the Universe,” Albert Einstein advised physicist Max Born (Olivia Newton-John’s grandfather) in 1936. 
“Make life difficult but not impossible,” Barricelli recommended. “Let the difficulties be various and serious but not too serious; let the conditions be changing frequently but not too radically and not in the whole universe at the same time.” 
No matter how long you wait, numbers will never become organisms, just as nucleotides will never become proteins. But they may learn to code for them. 
“I would suspect, that a truly efficient and economical organism is a combination of the ‘digital’ and ‘analogy’ principle,” he wrote in his preliminary notes on “Reliable Organizations of Unreliable Elements” (1951). “The ‘analogy’ procedure loses precision, and thereby endangers significance, rather fast … hence the ‘analogy’ method can probably not be used by itself—‘digital’ restandardizations will from time to time have to be interposed.” 
How did complex polynucleotides originate, and how did these molecules learn to coordinate the gathering of amino acids and the construction of proteins as a result? He saw the genetic code “as a language used by primordial ‘collector societies’ of t[ransfer]RNA molecules … specialized in the collection of amino acids and possibly other molecular objects, as a means to organize the delivery of collected material.” He drew analogies between this language and the languages used by other collector societies, such as social insects, but warned against “trying to use the ant and bee languages as an explanation of the origin of the genetic code.” 
Aggregations of order codes evolved into collector societies, bringing memory allocations and other resources back to the collective nest. Numerical organisms were replicated, nourished, and rewarded according to their ability to go out and do things: they performed arithmetic, processed words, designed nuclear weapons, and accounted for money in all its forms. They made their creators fabulously wealthy, securing contracts for the national laboratories and fortunes for Remington Rand and IBM. 
Twenty-five years later, much of the communication between computers is not passive data, but active instructions to construct specific machines, as needed, on the remote host. 
Barricelli believed in intelligent design, but the intelligence was bottom-up. 
The origin of species was not the origin of evolution, and the end of species will not be its end.
And the evening and the morning were the fifth day. 
“One of the facets of extreme originality is not to regard as obvious the things that lesser minds call obvious,” […] 
Complicated behavior does not require complicated states of mind. 
The title “On Computable Numbers” (rather than “On Computable Functions”) signaled a fundamental shift. Before Turing, things were done to numbers. After Turing, numbers began doing things. By showing that a machine could be encoded as a number, and a number decoded as a machine, “On Computable Numbers” led to numbers (now called “software”) that were “computable” in a way that was entirely new. 
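The shift described here — a machine encoded as a number, and the number decoded back into a machine that then does something — can be sketched in miniature. This toy (my construction; the base-12 packing scheme is my own choice, not Turing’s) encodes the transition table of the 2-state “busy beaver” as a single integer, decodes it, and runs it:

```python
def encode(table):
    """Pack a transition table into a single integer, one base-12 digit per entry."""
    n = 0
    for write, move, nxt in table:          # each entry fits in 0..11
        n = n * 12 + (write * 6 + move * 3 + nxt)
    return n

def decode(n, entries):
    """Recover the transition table from the integer."""
    table = []
    for _ in range(entries):
        n, code = divmod(n, 12)
        write, rest = divmod(code, 6)
        move, nxt = divmod(rest, 3)
        table.append((write, move, nxt))
    return list(reversed(table))

def run(table, max_steps=100):
    """Run the decoded machine; state 2 means halt, move 0 = left, 1 = right."""
    tape, head, state, steps = {}, 0, 0, 0
    while state != 2 and steps < max_steps:
        write, move, state = table[state * 2 + tape.get(head, 0)]
        tape[head] = write
        head += 1 if move else -1
        steps += 1
    return sum(tape.values()), steps

# 2-state busy beaver: rows are (A,0), (A,1), (B,0), (B,1)
beaver = [(1, 1, 1), (1, 0, 1), (1, 0, 0), (1, 1, 2)]
number = encode(beaver)
print(number)                    # → 18371: the whole machine, as one integer
print(run(decode(number, 4)))    # → (4, 6): four ones written, six steps taken
```

The integer 18371 is inert data until it is decoded — at which point it becomes a machine that acts. That round trip between number and machine is the hinge of “On Computable Numbers.”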
The relations between patience, ingenuity, and intuition led Turing to begin thinking about cryptography, where a little ingenuity in encoding a message can resist a large amount of ingenuity if the message is intercepted along the way. […] A Turing machine can also be instructed to search for meaningful statements, but since there will always be uncountably more meaningless statements than meaningful ones, concealment would appear to win. [5752-5755]
“When the war started probably only two people thought that the Naval Enigma could be broken,” explained Hugh Alexander, in an internal history written at the end of the war. “Birch [Alexander’s boss] thought it could be broken because it had to be broken and Turing thought it could be broken because it would be so interesting to break it.” 
Jack Good would later explain that “the ultraintelligent machine … is a machine that believes people cannot think.”
Digital computers are able to answer most—but not all—questions stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). Computers have been getting better and better at providing answers—but only to questions that programmers are able to ask. What about questions that computers can give useful answers to but that are difficult to define? [5969-5971]
The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand. The path to artificial intelligence, suggested Turing, is to construct a machine with the curiosity of a child, and let intelligence evolve. 
Search engines are copy engines: replicating everything they find. When a search result is retrieved, the data are locally replicated: on the host computer and at various servers and caches along the way. Data that are widely replicated, or associated frequently by search requests, establish physical proximity that is manifested as proximity in time. More meaningful results appear higher on the list not only because of some mysterious, top-down, weighting algorithm, but because when microseconds count, they are closer, from the bottom up, in time. Meaning just seems to “come to mind” first. 
Structure can always be replaced by code. 
Biology has been doing this all along. Life relies on digitally coded instructions, translating between sequence and structure (from nucleotides to proteins), with ribosomes reading, duplicating, and interpreting the sequences on the tape. 
In biology, the instructions say, “DO THIS with the next copy of THAT which comes along.” THAT is identified not by a numerical address defining a physical location, but by a molecular template that identifies a larger, complex molecule by some smaller, identifiable part. This is the reason that organisms are composed of microscopic (or near-microscopic) cells, since only by keeping all the components in close physical proximity will a stochastic, template-based addressing scheme work fast enough. There is no central address authority and no central clock. Many things can happen at once. This ability to take general, organized advantage of local, haphazard processes is the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers. 
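The two addressing styles contrasted here can be sketched crudely (my analogy, not code from the book): a numerical address names an exact position, while a template picks out whatever molecule happens to carry a recognizable fragment, regardless of where it sits.

```python
# A toy "soup" of molecules, represented as strings.
molecules = ["lipid-A", "tRNA-ala", "ribosome", "tRNA-gly", "peptide"]

# Numerical addressing: rigid -- it breaks if anything moves or is reordered.
exact = molecules[1]

# Template addressing: grab anything that carries the "tRNA" motif,
# wherever it happens to drift by. Position no longer matters.
template_hits = [m for m in molecules if "tRNA" in m]

print(exact)          # → 'tRNA-ala'
print(template_hits)  # → ['tRNA-ala', 'tRNA-gly']
```

The template scheme is slower per lookup but fault-tolerant and order-independent — which is why, as the passage argues, it only works when everything is kept in close physical proximity.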
Part of the problem, as Jack Good put it in 1962, is that “analogue computers are stupidly named; they should be named continuous computers.” […] “If the only demerit of the digital expansion system were its greater logical complexity, nature would not, for this reason alone, have rejected it,” von Neumann admitted in 1948.
Search engines and social networks are analog computers of unprecedented scale. Information is being encoded (and operated upon) as continuous (and noise-tolerant) variables such as frequencies (of connection or occurrence) and the topology of what connects where, with location being increasingly defined by a fault-tolerant template rather than by an unforgiving numerical address. [6337-6345]
“[…] It is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object.” 
Self-reproduction is an accident that only has to happen once. 
Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components 
Model of General Economic Equilibrium 
We measure our economy in money, not in things, and have yet to develop economic models that adequately account for the effects of self-reproducing machines and self-replicating codes. 
Evolution in the digital universe now drives evolution in our universe, rather than the other way around. 
Over long distances, it is expensive to transport structures, and inexpensive to transmit sequences. Turing machines, which by definition are structures that can be encoded as sequences, are already propagating themselves, locally, at the speed of light. The notion that one particular computer resides in one particular location at one time is obsolete. 
“My own personal theory is that extraterrestrial life could be here already … and how would we necessarily know? If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there’s a digital world—a civilization that has discovered the Universal Turing Machine—for it to colonize when it gets there. And that’s why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life.”
There was a long, drawn-out pause. “Look,” Teller finally said, lowering his voice to a raspy whisper, “may I suggest that instead of explaining this, which would be hard … you write a science-fiction book about it.”
“Probably someone has,” I said.
“Probably,” answered Teller, “someone has not.” 
By mid-1953, five distinct sets of problems were running on the MANIAC, characterized by different scales in time: (1) nuclear explosions, over in microseconds; (2) shock and blast waves, ranging from microseconds to minutes; (3) meteorology, ranging from minutes to years; (4) biological evolution, ranging from years to millions of years; and (5) stellar evolution, ranging from millions to billions of years. All this in 5 kilobytes—enough memory for about one-half second of audio, at the rate we now compress music into MP3s. 
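Dyson’s audio comparison is easy to check. The arithmetic below is mine, and the 128 kbit/s rate is an assumption (a common MP3 bitrate, not stated in the book); at lower bitrates the figure lands closer to the half second he cites.

```python
memory_bits = 5 * 1024 * 8   # MANIAC's 5 kilobytes, expressed in bits
mp3_bitrate = 128_000        # bits per second at a typical MP3 setting (my assumption)
seconds = memory_bits / mp3_bitrate
print(round(seconds, 2))     # → 0.32: the same order as Dyson's "one-half second"
```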
The middle of this range falls between 10⁴ and 10⁵ seconds, or about eight hours, exactly in the middle of the range (from the blink of an eye, over in three-tenths of a second, to a lifetime of three billion seconds, or ninety years) that a human being is able to directly comprehend. 
The “last mile” problem — how to reach individual devices without individual connection costs — has evaporated with the appearance of wireless devices […] 
This is why it is so difficult to make predictions, within the frame of reference of our universe, as to the future of the digital universe, where time as we know it does not exist. 
VON NEUMANN MADE a deal with “the other party” in 1946. The scientists would get the computers, and the military would get the bombs. This seems to have turned out well enough so far, because, contrary to von Neumann’s expectations, it was the computers that exploded, not the bombs. 
Alfvén also argued, without convincing the orthodoxy, that the large-scale structure of the universe might be hierarchical to infinity, rather than expanding from a single source. Such a universe — fulfilling Leibniz’s ideal of everything from nothing — would have an average density of zero but infinite mass. 
The power of the genetic code, as both Barricelli and von Neumann immediately recognized, lies in its ambiguity: exact transcription but redundant expression. In this lies the future of digital code. 
So as not to shoot down commercial airliners, the SAGE (Semi-Automatic Ground Environment) air defense system that developed out of MIT’s Project Whirlwind in the 1950s kept track of all passenger flights, developing a real-time model that led to the SABRE (Semi-Automatic Business-Related Environment) airline reservation system that still controls much of the passenger traffic today. 
It is easier to write a new code than to understand an old one. — John von Neumann to Marston Morse, 1952 
[…] one for the Bureau of the Census […] 
“The agreement with Poisson’s law of improbable events draws our attention to the existence of a persistent background of probability,” he concluded. “If the beginnings of wars had been the only facts involved, we might have called it a background of pugnacity. But, as the ends of wars have the same distribution, the background appears to be composed of a restless desire for change.”
“Science thrives on openness,” he reflected in 1981, “but during World War II we were obliged to put secrecy practices into effect. After the war, the question of secrecy was reconsidered … but the practice of classification continued; it was our ‘security,’ whether it worked or failed.… The limitations we impose on ourselves by restricting information are far greater than any advantage others could gain.” 
At a Friends meeting, silence is a form of communication, an exception to Bigelow’s rule that absence of a signal should never be used as a signal.