
Rain Sweet Dew


The Tao is always nameless.
And even though a sapling might be small
No one can make it be his subject.
If rulers could embody this principle
The myriad things would follow on their own.
Heaven and Earth would be in perfect accord
And rain sweet dew.

People, unable to deal with It on its own terms
Make adjustments;
And so you have the beginning of division into names.
Since there are already plenty of names
You should know where to stop.
Knowing where to stop, you can avoid danger.

The Tao’s existence in the world

Is like valley streams running into the rivers and seas.


[ ] [ [ [ ] ] ]
[ ] [ [ ] ] ] ]
[ ] [ [ [ ] [ [
[ ] [ [ ] [ [ ]
[ ] [ ] [ [ ] ]
[ ] [ [ ] ] [ [
[ ] [ [ ] ] ] ]
[ ] [ ] [ ] ] [
[ ] [ [ [ ] [ ]

[ The Heretic – The Morning News ]
[The Development of the Space-Time View of Quantum Electrodynamics ]
[ karymullis.com ]
[ Discovery of DNA Double Helix: Watson and Crick ]

Civil disobedience is a Dyson Sphere

Dynamics of Josephson Junctions and Circuits by Likharev

The Scientist as Rebel
MAY 25, 1995 Freeman Dyson

There is no such thing as a unique scientific vision, any more than there is a unique poetic vision. Science is a mosaic of partial and conflicting visions. But there is one common element in these visions. The common element is rebellion against the restrictions imposed by the locally prevailing culture, Western or Eastern as the case may be. The vision of science is not specifically Western. It is no more Western than it is Arab or Indian or Japanese or Chinese. Arabs and Indians and Japanese and Chinese had a big share in the development of modern science. And two thousand years earlier, the beginnings of ancient science were as much Babylonian and Egyptian as Greek. One of the central facts about science is that it pays no attention to East and West and North and South and black and yellow and white. It belongs to everybody who is willing to make the effort to learn it. And what is true of science is also true of poetry. Poetry was not invented by Westerners. India has poetry older than Homer. Poetry runs as deep in Arab and Japanese culture as it does in Russian and English. Just because I quote poems in English, it does not follow that the vision of poetry has to be Western. Poetry and science are gifts given to all of humanity.
For the great Arab mathematician and astronomer Omar Khayyam, science was a rebellion against the intellectual constraints of Islam, a rebellion which he expressed more directly in his incomparable verses:

And that inverted Bowl they call the Sky,
Whereunder crawling cooped we live and die,
Lift not your hands to It for help, for It
As impotently rolls as you or I.

For the first generations of Japanese scientists in the nineteenth century, science was a rebellion against their traditional culture of feudalism. For the great Indian physicists of this century, Raman, Bose, and Saha, science was a double rebellion, first against English domination and second against the fatalistic ethic of Hinduism. And in the West, too, great scientists from Galileo to Einstein have been rebels. Here is how Einstein himself described the situation:

When I was in the seventh grade at the Luitpold Gymnasium in Munich, I was summoned by my home-room teacher who expressed the wish that I leave the school. To my remark that I had done nothing amiss, he replied only, “Your mere presence spoils the respect of the class for me.”

Einstein was glad to be helpful to the teacher. He followed the teacher’s advice and dropped out of school at the age of fifteen.
From these and many other examples we see that science is not governed by the rules of Western philosophy or Western methodology. Science is an alliance of free spirits in all cultures rebelling against the local tyranny that each culture imposes on its children. Insofar as I am a scientist, my vision of the universe is not reductionist or anti-reductionist. I have no use for Western isms of any kind. Like Loren Eiseley, I feel myself a traveler on a journey that is far longer than the history of nations and philosophies, longer even than the history of our species.
A few years ago an exhibition of Paleolithic cave art came to the Museum of Natural History in New York. It was a wonderful opportunity to see in one place the carvings in stone and bone that are normally kept in a dozen separate museums in France. Most of the carvings were done in France about 14,000 years ago, during a short flowering of artistic creation at the very end of the last Ice Age. The beauty and delicacy of the carving is extraordinary. The people who carved these objects cannot have been ordinary hunters amusing themselves in front of the cave fire. They must have been trained artists sustained by a high culture.
And the greatest surprise, when you see these objects for the first time, is the fact that their culture is not Western. They have no resemblance at all to the primitive art that arose ten thousand years later in Mesopotamia and Egypt and Crete. If I had not known that the old cave art was found in France, I would have guessed that it came from Japan. The style looks today more Japanese than European. That exhibition showed us vividly that over periods of 10,000 years the distinctions between Western and Eastern and African cultures lose all meaning. Over a time span of 100,000 years we are all Africans. And over a time span of 300 million years we are all amphibians, waddling uncertainly out of dried-up ponds onto the alien and hostile land.
And with this long view of the past goes Robinson Jeffers’s even longer view of the future. In the long view, not only European civilization but the human species itself is transitory. Here is the vision of Robinson Jeffers, expressed in different parts of his long poem “The Double Axe.”

Come, little ones.
You are worth no more than the foxes and yellow wolfkins, yet I will give you wisdom.
O future children:
Trouble is coming; the world as of the present time
Sails on its rocks; but you will be born and live
Afterwards. Also a day will come when the earth
Will scratch herself and smile and rub off humanity:
But you will be born before that.

Time will come, no doubt,
When the sun too shall die; the planets will freeze, and the air on them; frozen gases, white flakes of air
Will be the dust: which no wind ever will stir: this very dust in dim starlight glistening
Is dead wind, the white corpse of wind.
Also the galaxy will die; the glitter of the Milky Way, our universe, all the stars that have names are dead.
Vast is the night. How you have grown, dear night, walking your empty halls, how tall!

Robinson Jeffers was no scientist, but he expressed better than any other poet the scientist’s vision. Ironic, detached, contemptuous like Einstein of national pride and cultural taboos, he stood in awe of nature alone. He stood alone in uncompromising opposition to the follies of the Second World War. His poems during those years of patriotic frenzy were unpublishable. “The Double Axe” was finally published in 1948, after a long dispute between Jeffers and his editors. I discovered Jeffers thirty years later, when the sadness and the passion of the war had become a distant memory. Fortunately, his works are now in print and you can read them for yourselves.
Science as subversion has a long history. There is a long list of scientists who sat in jail and of other scientists who helped get them out and incidentally saved their lives. In our century we have seen the physicist Landau sitting in jail in the Soviet Union and Kapitsa risking his own life by appealing to Stalin to let Landau out. We have seen the mathematician André Weil sitting in jail in Finland during the Winter War of 1939–1940 and Lars Ahlfors saving his life. The finest moment in the history of the Institute for Advanced Study, where I work, came in 1957, when we appointed the mathematician Chandler Davis a member of the Institute, with financial support provided by the American government through the National Science Foundation. Chandler was then a convicted felon because he refused to rat on his friends when questioned by the House Un-American Activities Committee. He had been convicted of contempt of Congress for not answering questions and had appealed against his conviction to the Supreme Court.
While his case was under appeal, he came to Princeton and continued doing mathematics. That is a good example of science as subversion. After his Institute fellowship was over, he lost his appeal and sat for six months in jail. Chandler is now a distinguished professor at the University of Toronto and is actively engaged in helping people in jail to get out. Another example of science as subversion is Andrei Sakharov. Chandler Davis and Sakharov belong to an old tradition in science that goes all the way back to the rebels Franklin and Priestley in the eighteenth century, to Galileo and Giordano Bruno in the seventeenth and sixteenth. If science ceases to be a rebellion against authority, then it does not deserve the talents of our brightest children. I was lucky to be introduced to science at school as a subversive activity of the younger boys. We organized a Science Society as an act of rebellion against compulsory Latin and compulsory football. We should try to introduce our children to science today as a rebellion against poverty and ugliness and militarism and economic injustice.
The vision of science as rebellion was articulated in Cambridge with great clarity on February 4, 1923, in a lecture by the biologist J.B.S. Haldane to the Society of Heretics. The lecture was published as a little book with the title Daedalus. Here is Haldane’s vision of the role of the scientist. I have taken the liberty to abbreviate Haldane slightly and to omit the phrases that he quoted in Latin and Greek, since unfortunately I can no longer assume that the heretics of Cambridge are fluent in those languages.

The conservative has but little to fear from the man whose reason is the servant of his passions, but let him beware of him in whom reason has become the greatest and most terrible of the passions. These are the wreckers of outworn empires and civilizations, doubters, disintegrators, deicides. In the past they have been men like Voltaire, Bentham, Thales, Marx, but I think that Darwin furnishes an example of the same relentlessness of reason in the field of science. I suspect that as it becomes clear that at present reason not only has a freer play in science than elsewhere, but can produce as great effects on the world through science as through politics, philosophy or literature, there will be more Darwins.
We must regard science, then, from three points of view. First, it is the free activity of man’s divine faculties of reason and imagination. Secondly, it is the answer of the few to the demands of the many for wealth, comfort and victory, gifts which it will grant only in exchange for peace, security and stagnation. Finally it is man’s gradual conquest, first of space and time, then of matter as such, then of his own body and those of other living beings, and finally the subjugation of the dark and evil elements in his own soul.

I have already made it clear that I have a low opinion of reductionism, which seems to me to be at best irrelevant and at worst misleading as a description of what science is about. Let me begin with pure mathematics. Here the failure of reductionism has been demonstrated by rigorous proof. This will be a familiar story to many of you. The great mathematician David Hilbert, after thirty years of high creative achievement on the frontiers of mathematics, walked into a blind alley of reductionism. In his later years he espoused a program of formalization, which aimed to reduce the whole of mathematics to a collection of formal statements using a finite alphabet of symbols and a finite set of axioms and rules of inference. This was reductionism in the most literal sense, reducing mathematics to a set of marks written on paper, and deliberately ignoring the context of ideas and applications that give meaning to the marks. Hilbert then proposed to solve the problems of mathematics by finding a general process that could decide, given any formal statement composed of mathematical symbols, whether that statement was true or false. He called the problem of finding this decision process the Entscheidungsproblem. He dreamed of solving the Entscheidungsproblem and thereby solving as corollaries all the famous unsolved problems of mathematics. This was to be the crowning achievement of his life, the achievement that would outshine all the achievements of earlier mathematicians who solved problems only one at a time.
The essence of Hilbert’s program was to find a decision process that would operate on symbols in a purely mechanical fashion, without requiring any understanding of their meaning. Since mathematics was reduced to a collection of marks on paper, the decision process should concern itself only with the marks and not with the fallible human intuitions out of which the marks were reduced. In spite of prolonged efforts of Hilbert and his disciples, the Entscheidungsproblem was never solved. Success was achieved only in highly restricted domains of mathematics, excluding all the deeper and more interesting concepts. Hilbert never gave up hope, but as the years went by his program became an exercise in formal logic having little connection with real mathematics. Finally, when Hilbert was seventy years old, Kurt Gödel proved by a brilliant analysis that the Entscheidungsproblem as Hilbert formulated it cannot be solved.
Gödel proved that, in any formulation of mathematics including the rules of ordinary arithmetic, a formal process for separating statements into true and false cannot exist. He proved the stronger result which is now known as Gödel’s Theorem, that in any formalization of mathematics including the rules of ordinary arithmetic there are meaningful arithmetical statements that cannot be proved true or false. Gödel’s Theorem shows conclusively that in pure mathematics reductionism does not work. To decide whether a mathematical statement is true, it is not sufficient to reduce the statement to marks on paper and to study the behavior of the marks. Except in trivial cases, you can decide the truth of a statement only by studying its meaning and its context in the larger world of mathematical ideas.
It is a curious paradox that several of the greatest and most creative spirits in science, after achieving important discoveries by following their unfettered imaginations, were in their later years obsessed with reductionist philosophy and as a result became sterile. Hilbert was a prime example of this paradox. Einstein was another. Like Hilbert, Einstein did his great work up to the age of forty without any reductionist bias. His crowning achievement, the general relativistic theory of gravitation, grew out of a deep physical understanding of natural processes. Only at the very end of his ten-year struggle to understand gravitation did he reduce the outcome of his understanding to a finite set of field equations. But like Hilbert, as he grew older he concentrated his attention more and more on the formal properties of his equations, and he lost interest in the wider universe of ideas out of which the equations arose.
His last twenty years were spent in a fruitless search for a set of equations that would unify the whole of physics, without paying attention to the rapidly proliferating experimental discoveries that any unified theory would finally have to explain. I do not need to say more about this tragic and well-known story of Einstein’s lonely attempt to reduce physics to a finite set of marks on paper. His attempt failed as dismally as Hilbert’s attempt to do the same thing with mathematics. I shall instead discuss another aspect of Einstein’s later life, an aspect that has received less attention than his quest for the unified field equations: his extraordinary hostility to the idea of black holes.
Black holes were invented by Oppenheimer and Snyder in 1939. Starting from Einstein’s theory of general relativity, Oppenheimer and Snyder found solutions of Einstein’s equations that described what happens to a massive star when it has exhausted its supplies of nuclear energy. The star collapses gravitationally and disappears from the visible universe, leaving behind only an intense gravitational field to mark its presence. The star remains in a state of permanent free fall, collapsing endlessly inward into the gravitational pit without ever reaching the bottom. This solution of Einstein’s equations was profoundly novel. It has had enormous impact on the later development of astrophysics.
We now know that black holes ranging in mass from a few suns to a few billion suns actually exist and play a dominant role in the economy of the universe. In my opinion, the black hole is incomparably the most exciting and the most important consequence of general relativity. Black holes are the places in the universe where general relativity is decisive. But Einstein never acknowledged his brainchild. Einstein was not merely skeptical, he was actively hostile to the idea of black holes. He thought that the black hole solution was a blemish to be removed from his theory by a better mathematical formulation, not a consequence to be tested by observation. He never expressed the slightest enthusiasm for black holes, either as a concept or as a physical possibility. Oddly enough, Oppenheimer too in later life was uninterested in black holes, although in retrospect we can say that they were his most important contribution to science. The older Einstein and the older Oppenheimer were blind to the mathematical beauty of black holes, and indifferent to the question whether black holes actually exist.
How did this blindness and this indifference come about? I never discussed this question directly with Einstein, but I discussed it several times with Oppenheimer and I believe that Oppenheimer’s answer applies equally to Einstein. Oppenheimer in his later years believed that the only problem worthy of the attention of a serious theoretical physicist was the discovery of the fundamental equations of physics. Einstein certainly felt the same way. To discover the right equations was all that mattered. Once you had discovered the right equations, then the study of particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students. In Oppenheimer’s view, it would be a waste of his precious time, or of mine, to concern ourselves with the details of particular solutions. This was how the philosophy of reductionism led Oppenheimer and Einstein astray. Since the only purpose of physics was to reduce the world of physical phenomena to a finite set of fundamental equations, the study of particular solutions such as black holes was an undesirable distraction from the general goal. Like Hilbert, they were not content to solve particular problems one at a time. They were entranced by the dream of solving all the basic problems at once. And as a result, they failed in their later years to solve any problems at all.
In the history of science it happens not infrequently that a reductionist approach leads to a spectacular success. Frequently the understanding of a complicated system as a whole is impossible without an understanding of its component parts. And sometimes the understanding of a whole field of science is suddenly advanced by the discovery of a single basic equation. Thus it happened that the Schroedinger equation in 1926 and the Dirac equation in 1927 brought a miraculous order into the previously mysterious processes of atomic physics. The equations of Schroedinger and Dirac were triumphs of reductionism. Bewildering complexities of chemistry and physics were reduced to two lines of algebraic symbols. These triumphs were in Oppenheimer’s mind when he belittled his own discovery of black holes. Compared with the abstract beauty and simplicity of the Dirac equation, the black hole solution seemed to him ugly, complicated, and lacking in fundamental significance.
But it happens at least equally often in the history of science that the understanding of the component parts of a composite system is impossible without an understanding of the behavior of the system as a whole. And it often happens that the understanding of the mathematical nature of an equation is impossible without a detailed understanding of its solutions. The black hole is a case in point. One could say without exaggeration that Einstein’s equations of general relativity were understood only at a very superficial level before the discovery of the black hole. During the fifty years since the black hole was invented, a deep mathematical understanding of the geometrical structure of space-time has slowly emerged, with the black hole solution playing a fundamental role in the structure. The progress of science requires the growth of understanding in both directions, downward from the whole to the parts and upward from the parts to the whole. A reductionist philosophy, arbitrarily proclaiming that the growth of understanding must go only in one direction, makes no scientific sense. Indeed, dogmatic philosophical beliefs of any kind have no place in science.
Science in its everyday practice is much closer to art than to philosophy. When I look at Gödel’s proof of his undecidability theorem, I do not see a philosophical argument. The proof is a soaring piece of architecture, as unique and as lovely as Chartres Cathedral. Gödel took Hilbert’s formalized axioms of mathematics as his building blocks and built out of them a lofty structure of ideas into which he could finally insert his undecidable arithmetical statement as the keystone of the arch. The proof is a great work of art. It is a construction, not a reduction. It destroyed Hilbert’s dream of reducing all mathematics to a few equations, and replaced it with a greater dream of mathematics as an endlessly growing realm of ideas. Gödel proved that in mathematics the whole is always greater than the sum of the parts. Every formalization of mathematics raises questions that reach beyond the limits of the formalism into unexplored territory.
The black hole solution of Einstein’s equations is also a work of art. The black hole is not as majestic as Gödel’s proof, but it has the essential features of a work of art: uniqueness, beauty, and unexpectedness. Oppenheimer and Snyder built out of Einstein’s equations a structure that Einstein had never imagined. The idea of matter in permanent free fall was hidden in the equations, but nobody saw it until it was revealed in the Oppenheimer-Snyder solution. On a much more humble level, my own activities as a theoretical physicist have a similar quality. When I am working, I feel myself to be practicing a craft rather than following a method. When I did my most important piece of work as a young man, putting together the ideas of Tomonaga, Schwinger, and Feynman to obtain a simplified version of quantum electrodynamics, I had consciously in mind a metaphor to describe what I was doing. The metaphor was bridge-building. Tomonaga and Schwinger had built solid foundations on one side of a river of ignorance, Feynman had built solid foundations on the other side, and my job was to design and build the cantilevers reaching out over the water until they met in the middle. The metaphor was a good one. The bridge that I built is still serviceable and still carrying traffic forty years later. The same metaphor describes well the greater work of unification achieved by Weinberg and Salam when they bridged the gap between electrodynamics and the weak interactions. In each case, after the work of unification is done, the whole stands higher than the parts.
In recent years there has been great dispute among historians of science, some believing that science is driven by social forces, others believing that science transcends social forces and is driven by its own internal logic and by the objective facts of nature. Historians of the first group write social history, those of the second group write intellectual history. Since I believe that scientists should be artists and rebels, obeying their own instincts rather than social demands or philosophical principles, I do not fully agree with either view of history. Nevertheless, scientists should pay attention to the historians. We have much to learn, especially from the social historians.
Many years ago, when I was in Zürich, I went to see the play The Physicists by the Swiss playwright Dürrenmatt. The characters in the play are grotesque caricatures, wearing the costumes and using the names of Newton, Einstein, and Möbius. The action takes place in a lunatic asylum where the physicists are patients. In the first act they entertain themselves by murdering their nurses, and in the second act they are revealed to be secret agents in the pay of rival intelligence services. I found the play amusing but at the same time irritating. These absurd creatures on the stage had no resemblance at all to any real physicist. I complained about the unreality of the characters to my friend Markus Fierz, a well-known Swiss physicist, who came with me to the play. “But don’t you see?” said Fierz, “The whole point of the play is to show us how we look to the rest of the human race.”
Fierz was right. The image of noble and virtuous dedication to truth, the image that scientists have traditionally presented to the public, is no longer credible. The public, having found out that the traditional image of the scientist as a secular saint is false, has gone to the opposite extreme and imagines us to be irresponsible devils playing with human lives. Dürrenmatt has held up the mirror to us and has shown us the image of ourselves as the public sees us. It is our task now to dispel these fantasies with facts, showing to the public that scientists are neither saints nor devils but human beings sharing the common weaknesses of our species.
Historians who believe in the transcendence of science have portrayed scientists as living in a transcendent world of the intellect, superior to the transient, corruptible, mundane realities of the social world. Any scientist who claims to follow such exalted ideals is easily held up to ridicule as a pious fraud. We all know that scientists, like television evangelists and politicians, are not immune to the corrupting influences of power and money. Much of the history of science, like the history of religion, is a history of struggles driven by power and money. And yet this is not the whole story. Genuine saints occasionally play an important role, both in religion and in science. Einstein was an important figure in the history of science, and he was a firm believer in transcendence. For Einstein, science as a way of escape from mundane reality was no pretense. For many scientists less divinely gifted than Einstein, the chief reward for being a scientist is not the power and the money but the chance of catching a glimpse of the transcendent beauty of nature.
Both in science and in history there is room for a variety of styles and purposes. There is no necessary contradiction between the transcendence of science and the realities of social history. One may believe that in science nature will ultimately have the last word, and still recognize an enormous role for human vainglory and viciousness in the practice of science before the last word is spoken. One may believe that the historian’s job is to expose the hidden influences of power and money, and still recognize that the laws of nature cannot be bent and cannot be corrupted by power and money. To my mind, the history of science is most illuminating when the frailties of human actors are put into juxtaposition with the transcendence of nature’s laws.
Francis Crick is one of the great scientists of our century. He has recently published his personal narrative of the microbiological revolution that he helped to bring about, with a title borrowed from Keats, What Mad Pursuit. One of the most illuminating passages in his account compares two discoveries in which he was involved. One was the discovery of the double-helix structure of DNA, the other was the discovery of the triple-helix structure of the collagen molecule. Both molecules are biologically important, DNA being the carrier of genetic information, collagen being the protein that holds human bodies together. The two discoveries involved similar scientific techniques and aroused similar competitive passions in the scientists racing to be the first to find the structure.
Crick says that the two discoveries caused him equal excitement and equal pleasure at the time he was working on them. From the point of view of a historian who believes that science is a purely social construction, the two discoveries should have been equally significant. But in history as Crick experienced it, the two helixes were not equal. The double helix became the driving force of a new science, while the triple helix remained a footnote of interest only to specialists. Crick asks the question, how the different fates of the two helixes are to be explained. He answers the question by saying that human and social influences cannot explain the difference, that only the transcendent beauty of the double-helix structure and its genetic function can explain the difference. Nature herself, and not the scientist, decided what was important. In the history of the double helix, transcendence was real. Crick gives himself the credit for choosing an important problem to work on, but, he says, only Nature herself could tell how transcendentally important it would turn out to be.
My message is that science is a human activity, and the best way to understand it is to understand the individual human beings who practice it. Science is an art form and not a philosophical method. The great advances in science usually result from new tools rather than from new doctrines. If we try to squeeze science into a single philosophical viewpoint such as reductionism, we are like Procrustes chopping off the feet of his guests when they do not fit onto his bed. Science flourishes best when it uses freely all the tools at hand, unconstrained by preconceived notions of what science ought to be. Every time we introduce a new tool, it always leads to new and unexpected discoveries, because Nature’s imagination is richer than ours.

“So I ran to the library. There were hundreds of papers on protein formation and almost none on protein degradation. It was obvious that protein degradation was important. It was also obvious that nobody much cared about it. So here was perfect territory for a curious young scientist.”

If one is given a puzzle to solve one will usually, if it proves to be difficult, ask the owner whether it can be done. Such a question should have a quite definite answer, yes or no, at any rate provided the rules describing what you are allowed to do are perfectly clear. Of course the owner of the puzzle may not know the answer. One might equally ask, ‘How can one tell whether a puzzle is solvable?’, but this cannot be answered so straightforwardly. The fact of the matter is that there is no systematic method of testing puzzles to see whether they are solvable or not. If by this one meant merely that nobody had ever yet found a test which could be applied to any puzzle, there would be nothing at all remarkable in the statement. It would have been a great achievement to have invented such a test, so we can hardly be surprised that it has never been done. But it is not merely that the test has never been found. It has been proved that no such test ever can be found.

– AM Turing –

Copeland, B. Jack (2004). The Essential Turing (p. 582). Clarendon Press.

‘The “computable” numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique. I hope shortly to give an account of the relations of the computable numbers, functions, and so forth to one another. This will include a development of the theory of functions of a real variable expressed in terms of computable numbers. According to my definition, a number is computable if its decimal can be written down by a machine.’

Alan Turing
The Graduate College
Princeton University
New Jersey, U.S.A.

Bach = 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + 1 / ( 1 + … ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) …
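That infinite continued fraction converges to the golden ratio φ = (1 + √5)/2 ≈ 1.618, and its truncations are ratios of consecutive Fibonacci numbers (2/1, 3/2, 5/3, 8/5, …). A minimal sketch, evaluating a finite truncation from the inside out (the function name and cutoff depth are illustrative):

```python
import math

def continued_fraction(depth):
    # Evaluate 1 + 1/(1 + 1/(1 + ...)) truncated at the given depth,
    # folding from the innermost term outward.
    x = 1.0
    for _ in range(depth):
        x = 1.0 + 1.0 / x
    return x

phi = (1 + math.sqrt(5)) / 2  # the exact limit of the fraction

# Convergence is rapid: 40 levels agree with phi to machine precision.
approx = continued_fraction(40)
```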

If you make easy things hard,
then hard things become easy.

map (k1,v1) → list(k2,v2)
reduce (k2,list(v2)) → list(v2)
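Those two type signatures are the whole MapReduce programming model. A minimal in-memory sketch using word count, the usual illustration — the function names and driver are assumptions for exposition, not any particular framework's API:

```python
from collections import defaultdict

def map_fn(k1, v1):
    # map (k1, v1) -> list(k2, v2): emit (word, 1) per occurrence
    return [(word, 1) for word in v1.split()]

def reduce_fn(k2, v2_list):
    # reduce (k2, list(v2)) -> list(v2): sum the partial counts
    return [sum(v2_list)]

def run_mapreduce(inputs, map_fn, reduce_fn):
    # Shuffle phase: group every intermediate value by its key k2,
    # then hand each group to the reducer.
    groups = defaultdict(list)
    for k1, v1 in inputs:
        for k2, v2 in map_fn(k1, v1):
            groups[k2].append(v2)
    return {k2: reduce_fn(k2, v2s) for k2, v2s in groups.items()}

counts = run_mapreduce([("doc1", "to be or not to be")], map_fn, reduce_fn)
# counts == {"to": [2], "be": [2], "or": [1], "not": [1]}
```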

I feel the earth move.
I feel the tumbling down,
the tumbling down.

“Discord could be like sunlight,
which is plentiful
but has to be harnessed in a certain
way to be useful.
We need to identify
what that way is.”

Footnotes and References

3 To avoid the problem of what happens when there is no satisfying assignment, Aaronson proposes you instead kill yourself with probability 1 − 2^(−2n) if you do not guess a satisfying assignment. Then if you survive without having guessed an assignment, it is almost certain that there is no satisfying assignment. This step is not strictly necessary, however. There would always be some reality in which you somehow avoided killing yourself; perhaps your suicide machine of choice failed to operate in some highly improbable way. Of course, for the technique to work at all, such a failure must be very improbable.
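The footnote's arithmetic can be checked directly: a wrong guess survives only with probability 2^(−2n), so surviving without a satisfying guess is overwhelming evidence that no satisfying assignment exists. A sketch with exact fractions (the function name and parameters are mine, for illustration):

```python
from fractions import Fraction

def p_survived_without_guess(n, s):
    # n = number of variables, s = number of satisfying assignments.
    # You guess uniformly among 2**n assignments; a wrong guess
    # survives only with probability eps = 2**(-2n).
    eps = Fraction(1, 2**(2 * n))
    p_guess = Fraction(s, 2**n)
    p_survive = p_guess + (1 - p_guess) * eps
    # Conditional probability that a survivor holds no satisfying guess.
    return (1 - p_guess) * eps / p_survive

# Unsatisfiable formula (s = 0): every survivor guessed wrong.
# p_survived_without_guess(10, 0) == 1
# Even one satisfying assignment makes that outcome vanishingly rare:
# p_survived_without_guess(10, 1) < 2**-9
```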

[1] Scott Aaronson. “NP-complete Problems and Physical Reality.” SIGACT News 36:1 (2005), 30-52.

[54] Hugh Everett. “Relative State Formulation of Quantum Mechanics.” Rev. Mod. Phys. 29 (1957), 454-462.

[81] John Gribbin. “Doomsday Device.” Analog Science Fiction/Science Fact 105:2 (1985), 120-125.

[162] Max Tegmark and Nick Bostrom. “Is a Doomsday Catastrophe Likely?” Nature 438 (2005), 754.

**David Deutsch obviously has much to say on these subjects, especially w.r.t. Hugh Everett and Max Tegmark.**

“A mathematician is a machine that turns coffee into theorems,” Erdős used to say, quoting Rényi.

Compare the situation to the notion of a nondeterministic Turing machine. The conventional view is that a nondeterministic Turing machine is allowed to “guess” the right choice at each step of the computation. There is no question or issue of how the guess is made. Yet, one speaks of nondeterministic computations being “performed” by the machine. It is allowed access to a nonalgorithmic resource to perform its computation. Nondeterministic computers may or may not be “magical” relative to ordinary Turing machines; it is unknown whether P = NP. However, one kind of magic they definitely cannot perform is to turn finite space into infinite space. But a team-game “computer,” on the other hand, can perform this kind of magic, using only a slight generalization of the notion of nondeterminism. Whether these games can be played perfectly in the real world and thus, whether we can actually perform arbitrary computations with finite physical resources is a question of physics, not of computer science. And it is not immediately obvious that the answer must be no. Others have explored various possibilities for squeezing unusual kinds of computation out of physical reality. There have been many proposals for how to solve NP-complete problems in polynomial time; Aaronson [1] offers a good survey. One such idea, which works if one subscribes to Everett’s relative-state interpretation of quantum mechanics [54] (popularly called “many worlds”), is as follows. Say you want to solve an instance of SAT, which is NP-complete. You need to find a variable assignment that satisfies a Boolean formula with n variables. Then you can proceed as follows: guess a random variable assignment, and if it does not happen to satisfy the formula, kill yourself. 
Now, in the only realities you survive to experience you will have “solved” the problem in polynomial time. Aaronson has termed this approach “anthropic computing.” Apart from the possibly metaphysical question of whether there would indeed always be a “you” that survived this “computation,” there is the annoying practical problem that those around you would almost certainly experience your death, instead of your successful efficient computation. There is a way around this problem, however. Suppose that, instead of killing yourself, you destroy the entire universe. Then, effectively, the entire universe is cooperating in your computation, and nobody will ever experience you failing and killing yourself. A related idea was explored in the science-fiction story “Doomsday Device” by John Gribbin [81]. In that story a powerful particle accelerator seemingly fails to operate, for no good reason. Then a physicist realizes that if it were to work, it would effectively destroy the entire universe, by initiating a transition from a cosmological false-vacuum state to a lower-energy vacuum state. In fact, the accelerator has worked; the only realities the characters experience involve highly unlikely equipment failures. (Whether such a false-vacuum collapse is actually possible is an interesting question [162].) We can imagine incorporating such a particle accelerator in a computing machine. We would like to propose the term “doomsday computation” for any kind of computation in which the existence of the universe might depend on the output of the computation. Clearly, doomsday computation is a special case of anthropic computation. However, neither approach seems to offer the ability to perform arbitrary computations. Other approaches considered in [1] might do better: “time-travel computing,” which works by sending bits along closed timelike curves (CTCs), can solve PSPACE-complete problems in polynomial time.
Perhaps there is some way to generalize some such “weird physics” kind of computation to enable perfect game play. The basic idea of anthropic computation seems appropriate: filter out the realities in which you lose, post-selecting worlds in which you win. But directly applied, as in the SAT example above, this only works for bounded one-player puzzles. Computing with CTCs gets you to PSPACE, which is suggestive of solving a two-player, bounded-length game, or a one-player, unbounded-length puzzle. Perhaps just one step more is all that is needed to create a perfect team-game player, and thus a physically finite, but computationally universal, computer.

Robert A. Hearn; Erik D. Demaine. Games, Puzzles, and Computation
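The guess-and-check step in the SAT story above is easy to make concrete. A sketch assuming DIMACS-style clauses (a nonzero integer ±i denotes variable i or its negation); the helper names are mine, not the authors':

```python
import random

def satisfies(clauses, assignment):
    # A CNF formula holds iff every clause has at least one true literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def random_assignment(n):
    # The nondeterministic "guess": one of 2**n assignments at random.
    return {i: random.choice([True, False]) for i in range(1, n + 1)}

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true.
clauses = [[1, 2], [-1, 2]]
# satisfies(clauses, {1: False, 2: True}) -> True
# satisfies(clauses, {1: True, 2: False}) -> False
```

Checking a guess is polynomial; the hard part, as the passage notes, is arranging to experience only the realities where the guess was right.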

iℏ (d/dt)Ψ = HΨ

The relation between the imaginary (time) and real (H, a real Hermitian operator) parts of reality, which may mix when H takes a special form, leaving four-vectors

a_μ = (a_i, a_4)

invariant, where a_i is a real three-vector (space) and a_4 = ict is a purely imaginary scalar (time).


“So there’s an iron rule that just as you want to start getting worldly wisdom by asking why, why, why, in communicating with other people about everything, you want to include why, why, why. Even if it’s obvious, it’s wise to stick in the why.”

Elementary Worldly Wisdom

Science is inquiry. Nothing is ever completely understood, nothing is ever certain, no wisdom is ever accepted as sacrosanct. Art is discovery.

“i really can’t remember why we started this record, i no longer know what we were trying to do back then. i do know session after session went pear-shaped, we lost focus and almost gave up…did give up for a while. but then something happened and form started to emerge, and now i can honestly say that it’s the only sigur rós record i have listened to for pleasure in my own house after we’ve finished it.” – georg from sigur ros on valtari

We have been trying to see how far it is possible to eliminate intuition, and leave only ingenuity. We do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply. —Alan Turing, 1939

Teller assumed that I had come to ask him about the Teller-Ulam invention, and provided a lengthy account of the genesis of the hydrogen bomb, and of the fission implosion-explosion required to get the thermonuclear fuel to ignite. “The whole implosion idea—that is, that one can get densities considerably greater than normal—came from a visit from von Neumann,” he told me. “We proposed that together to Oppenheimer. He at once accepted.” With the hydrogen bomb out of the way, I mentioned that I was interested in the status of the Fermi paradox after fifty years.

“Let me ask you,” Teller interjected, in his thick Hungarian accent. “Are you uninterested in extraterrestrial intelligence? Obviously not. If you are interested, what would you look for?”

“There’s all sorts of things you can look for,” I answered. “But I think the thing not to look for is some intelligible signal.… Any civilization that is doing useful communication, any efficient transmission of information will be encoded, so it won’t be intelligible to us—it will look like noise.”

“Where would you look for that?” asked Teller.

“I don’t know.…”

“I do!”


“Globular clusters!” answered Teller. “We cannot get in touch with anybody else, because they choose to be so far away from us. In globular clusters, it is much easier for people at different places to get together. And if there is interstellar communication at all, it must be in the globular clusters.”

“That seems reasonable,” I agreed. “My own personal theory is that extraterrestrial life could be here already … and how would we necessarily know? If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there’s a digital world—a civilization that has discovered the Universal Turing Machine—for it to colonize when it gets there. And that’s why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life.”

There was a long, drawn-out pause. “Look,” Teller finally said, lowering his voice to a raspy whisper, “may I suggest that instead of explaining this, which would be hard … you write a science-fiction book about it.”

“Probably someone has,” I said.

“Probably,” answered Teller, “someone has not.”

( Chris Anderson (TED): Questions no one knows the answers to | Video on TED.com )

Von Neumann knew that the real challenge would be not building the computer, but asking the right questions, in language intelligible to the machine.

( Lists of Note )

The good news is that, as Leibniz suggested, we appear to live in the best of all possible worlds, where the computable functions make life predictable enough to be survivable, while the noncomputable functions make life (and mathematical truth) unpredictable enough to remain interesting, no matter how far computers continue to advance.

Dyson, George (2012-03-06). Turing’s Cathedral: The Origins of the Digital Universe. Random House, Inc..

CLOV (as before): How easy it is. They said to me, That’s friendship, yes, yes, no question, you’ve found it. They said to me, Here’s the place, stop, raise your head and look at all that beauty. That order! They said to me, Come now, you’re not a brute beast, think upon these things and you’ll see how all becomes clear. And simple! They said to me, What skilled attention they get, all these dying of their wounds.

HAMM: Enough!

CLOV (as before): I say to myself-sometimes, Clov, you must learn to suffer better than that if you want them to weary of punishing you -one day. I say to myself-sometimes, Clov, you must be there better than that if you want them to let you go-one day. But I feel too old, and too far, to form new habits. Good, it’ll never end, I’ll never go. (Pause.) Then one day, suddenly, it ends, it changes, I don’t understand, it dies, or it’s me, I don’t understand, that either. I ask the words that remain-sleeping, waking, morning, evening. They have nothing to say. (Pause.) I open the door of the cell and go. I am so bowed I only see my feet, if I open my eyes, and between my legs a little trail of black dust. I say to myself that the earth is extinguished, though I never saw it lit. (Pause.) It’s easy going. (Pause.) When I fall I’ll weep for happiness. (Pause. He goes towards the door.)

Samuel Beckett. Endgame and Act Without Words

Doubt is the vestibule which all must pass before they can enter the temple of wisdom. When we are in doubt and puzzle out the truth by our own exertions, we have gained something that will stay by us and will serve us again. But if to avoid the trouble of the search we avail ourselves of the superior information of a friend, such knowledge will not remain with us; we have not bought, but borrowed it. —C. C. Colton

Winkler, Peter (2010). Mathematical Puzzles: A Connoisseur’s Collection. Taylor & Francis.

Imagined Worlds
By Freeman Dyson

Chapter One: Stories

Successful technologies often begin as hobbies. Jacques Cousteau invented scuba diving because he enjoyed exploring caves. The Wright brothers invented flying as a relief from the monotony of their normal business of selling and repairing bicycles. A little earlier, the bicycle and the automobile began as recreational vehicles, as means for people of leisure to explore the countryside, before smooth roads existed to make riding and driving efficient. In all these technologies, the pioneers were spending their money and risking their lives for nothing more substantial than fun. Scuba diving is fun, flying is fun, riding bicycles and driving cars are fun, especially in the early days when nobody else is doing it. Even today, when each of these four hobbies has grown into a huge industry, when legal regulations are enforced to reduce the risks as far as possible, sport and recreation are still supplying much of the motivation for pushing the technologies ahead.

The history of flying is a good example to look at in detail for insight into the interaction of technology with human affairs, because two radically different technologies were competing for survival–in the beginning they were called heavier-than-air and lighter-than-air. The airplane and the airship were not only physically different in shape and size but also sociologically different. The airplane grew out of dreams of personal adventure. The airship grew out of dreams of empire. The image in the minds of airplane-builders was a bird. The image in the minds of airship-builders was an oceanliner.

We are lucky to have a vivid picture of the creative phases of these technologies, written by a man who was deeply involved in both and was also a gifted writer, Nevil Shute Norway. Before he became the famous novelist Nevil Shute–author of Pied Piper, A Town like Alice, On the Beach, and other wonderful stories–he was an aeronautical engineer working professionally on the design of airplanes and airships. He wrote an autobiography with the title Slide Rule, describing his life as an engineer.

Norway did not start out with any bias for airplanes and against airships. He worked on both with equal dedication, and he was particularly proud of his part in the design of the airship R100. He worked on it for six years, from the moment of conception in 1924 to the delivery in 1930, and flew on its triumphant maiden voyage in 1930, from London to Montreal and back. From a technical point of view, airships then had many advantages over airplanes, and the R100 was a technical success. But Norway saw clearly that the fate of airships and airplanes did not depend on technical factors alone. Even before he became a professional writer, he was more interested in people than in nuts and bolts. He saw and recorded the human factors that made the building of airplanes fun and made the building of airships a nightmare.

After finishing the R100, Norway started a company of his own, Airspeed Limited. It was one of the hundreds of small companies that were inventing and building and selling airplanes in the 1920s and 30s. Norway estimated that 100,000 different varieties of airplane were flown during those years. All over the world, enthusiastic inventors were selling airplanes to intrepid pilots and to fledgling airlines. Many of the pilots crashed and many of the airlines became bankrupt. Out of 100,000 types of airplane, about 100 survived to form the basis of modern aviation. The evolution of the airplane was a strictly Darwinian process in which almost all the varieties of airplane failed, just as almost all species of animal become extinct. Because of the rigorous selection, the few surviving airplanes are astonishingly reliable, economical, and safe.

The Darwinian process is ruthless, because it depends upon failure. It worked well in the evolution of airplanes because the airplanes were small, the companies that built them were small, and the costs of failure in money and lives were tolerable. Planes crashed, pilots were killed, and investors were ruined, but the scale of the losses was not large enough to halt the process of evolution. After the crash, new pilots and new investors would always appear with new dreams of glory. And so the selection process continued, weeding out the unfit, until airplanes and companies had grown so large that further weeding was officially discouraged. Norway’s company was one of the few that survived the weeding and became commercially profitable. As a result, it was bought out and became a division of De Havilland, losing the freedom to make its own decisions and take its own risks. Even before De Havilland took over the company, Norway decided that the business was no longer fun. He stopped building airplanes and started his new career as a novelist.

The evolution of airships was a different story, dominated by politicians rather than by inventors. British politicians in the 1920s were acutely aware that the century of world-wide British hegemony based upon sea power had come to an end. The British Empire was still the biggest in the world but could no longer rely on the Royal Navy to hold it together. Most of the leading politicians, both Conservative and Labor, still had dreams of empire. They were told by their military and political advisers that in the modern world air power was replacing sea power as the emblem of greatness. So they looked to air power as the wave of the future that would keep Britain on top of the world. And in this context it was natural for them to think of airships rather than airplanes as the vehicles of imperial authority. Airships were superficially like oceanliners, big and visually impressive. Airships could fly nonstop from one end of the empire to the other. Important politicians could fly in airships from remote dominions to meetings in London without being forced to neglect their domestic constituencies for a month. In contrast, airplanes were small, noisy, and ugly, altogether unworthy of such a lofty purpose. Airplanes at that time could not routinely fly over oceans. They could not stay aloft for long and were everywhere dependent on local bases. Airplanes were useful for fighting local battles, but not for administering a worldwide empire.

One of the politicians most obsessed with airships was the Labor Peer Lord Thompson, Secretary of State for Air in the Labor governments of 1924 and 1929. Lord Thompson was the driving force behind the project to build the R101 airship at the government-owned Royal Airship Works at Cardington. Being a socialist as well as an imperialist, he insisted that the government factory get the job. But as a compromise to keep the Conservative opposition happy, he arranged for a sister ship, the R100, to be built at the same time by the private firm Vickers Limited. The R101 and R100 were to be the flagships of the British Empire in the new era. The R101, being larger, would fly nonstop from England to India and perhaps later to Australia. The R100, a more modest enterprise, would provide regular service over the Atlantic between England and Canada. Norway, from his position in the team of engineers designing the R100, had a front-seat view of the fate of both airships.

The R101 project was from the beginning driven by ideology rather than by common sense. At all costs, the R101 had to be the largest airship in the world, and at all costs it had to be ready to fly to India by a fixed date in October 1930, when Lord Thompson himself would embark on its maiden voyage to Karachi and back, returning just in time to attend an Imperial Conference in London. His dramatic arrival at the conference by airship, bearing fresh flowers from India, would demonstrate to an admiring world the greatness of Britain and the Empire, and incidentally demonstrate the superiority of socialist industry and of Lord Thompson himself. The huge size and the fixed date were a fatal combination. The technical problems of sealing enormous gasbags so that they should not leak were never solved. There was no time to give the ship adequate shake-down trials before the voyage to India. It finally took off on its maiden voyage, soaking wet in foul weather, with Lord Thompson and his several thousand pounds of lordly baggage on board. The ship had barely enough lift to rise above its mooring-mast. Eight hours later it crashed and burned on a field in northern France. Of the fifty-four people on board, six survived. Lord Thompson was not among them.

Meanwhile, the R100, with Norway’s help, had been built in a more reasonable manner. Its gasbags did not leak, and it had an adequate margin of lift to carry its designed pay-load. The R100 completed its maiden voyage to Montreal and back without disaster, seven weeks before the R101 left England. But Norway found the voyage far from reassuring. He reports that the R100 was violently tossed around in a local thunderstorm over Canada and was lucky to have avoided being torn apart. He did not judge it safe enough for regular passenger service. The question whether it was safe enough became moot after the R101 disaster. After one such disaster, no passengers would be likely to volunteer for another. The R100 was quietly dismantled and the pieces sold for scrap. The era of imperial airships had come to an end.

The announced purpose of the R100 was to provide a reliable passenger service between England and Canada, arriving and leaving once a week. After the airship failed, Lord Cunard, the owner of the Cunard shipping company, asked his engineers what it would take to provide a weekly service across the Atlantic using only two oceanliners. At that time it took seven or eight days for a ship to cross the Atlantic, so that a weekly service needed at least three ships. To do it with two ships would require crossing in five days, with two days margin for bad weather, loading, and unloading. The Cunard engineers designed the Queen Mary and the Queen Elizabeth to cross in five days. To do this economically, because of the way wave-drag scales with speed and size, the two ships had to be substantially larger than other oceanliners. Lord Cunard felt confident that the business of transporting passengers by ship could remain profitable for a few more decades, and he ordered the ships to be built.
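The fleet arithmetic in that paragraph is worth making explicit: with weekly departures, the number of ships needed is the round-trip time divided by the sailing interval, rounded up. A sketch using the passage's figures (the function name and the one-day turnaround for conventional liners are my assumptions):

```python
import math

def ships_needed(crossing_days, turnaround_days, interval_days=7):
    # One ship covers a departure slot each time it completes a full
    # round trip: cross, turn around, cross back, turn around again.
    round_trip = 2 * (crossing_days + turnaround_days)
    return math.ceil(round_trip / interval_days)

# A 7-8 day crossing forces at least three ships for a weekly service:
# ships_needed(7.5, 1) -> 3
# The Queens' 5-day crossing with 2 days of margin needs only two:
# ships_needed(5, 2) -> 2
```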

In due course, after the interruption caused by the second world war, they were carrying passengers profitably across the ocean and incidentally breaking speed records. The British public was proud of these ships, which regularly won the famous Blue Ribbon for the fastest Atlantic crossing. The public imagined that the ships were designed to win the Blue Ribbon, but Lord Cunard said the public misunderstood the purpose of the ships completely. He said his purpose was always to build the smallest and slowest ships that could do a regular weekly service. It was just an unfortunate accident that to do this job you had to break records. The ships continued their weekly sailings profitably for many years, until the Boeing 707 put them out of business.

While oceanliners were still enjoying their heyday, before the triumph of the Boeing 707, another tragedy of ideologically driven technology occurred. This was the tragedy of the Comet jetliner. During World War II the De Havilland company had built bombers and jet fighters and acquired an appetite for bigger things. After the war, the company went ahead with the design of the Comet, a commercial jet that could fly twice as fast as the propeller-driven transport planes of that era. At the same time, the British government established the British Overseas Airways Corporation, a state-owned monopoly with responsibility for long-distance air routes. The Empire was disintegrating rapidly, but enough of it remained to inspire the planners at BOAC with new dreams of glory. Their dream was to deploy a fleet of Comets on the Empire routes that BOAC controlled, from London south to Africa and east to India and Australia.

The dream was seductive because it meant that Britain would move into the jet age five years ahead of the slow-moving Americans. While the Boeing Company hesitated, the Comets would be flying. The Comets would display to the world the superiority of British technology, and incidentally demonstrate that the Empire, now renamed the Commonwealth, was still alive. After the BOAC Comets had shown the way, other airlines all over the world would be placing orders with De Havilland. The dreams that inspired the Comet were the same dreams that inspired the R101 twenty years earlier. The heirs of Lord Thompson had learned little from his fate.

The Comet enterprise made the same mistake as the R101, pushing ahead into a difficult and demanding technology with a politically dictated time-table. The decision to rush the Comet into service in 1952 was driven by the political imperative of staying five years ahead of the Americans. One man foresaw the disaster that was coming. Nevil Shute, no longer an aeronautical engineer but a well-informed bystander, published in 1948 a novel with the title No Highway, which described how political pressures could push an unsafe airplane into service. The novel tells the story of a disaster that is remarkably similar to the Comet disasters that happened four years later.

The fatal flaw of the Comet was a concentration of stress at the corners of the windows. The stress caused the metal skin of the plane to crack and tear open. The cracking occurred only at high altitudes when the plane was fully pressurized. The result was a disintegration of the plane and strewing of wreckage over wide areas, leaving no clear evidence of the cause. Two planes were destroyed in this way, one over India and one over Africa, killing everybody on board. After the second crash, the Comets stopped flying. For five years no jetliners flew, until the Americans were ready with their reliable and thoroughly tested Boeing 707. It took a hundred deaths to stop the Comets from flying, twice as many as it took to stop the airships. If the Secretary of State for Air had been on board the first Comet when it crashed, the second crash might not have been necessary.

Nevil Shute explains how it happened that the R101 and the Comets were allowed to carry passengers without adequate flight-testing. It happened because of a clash of two cultures, the culture of politics and the culture of engineering. Politicians were making crucial decisions about technical matters which they did not understand. The job of a senior politician is to make decisions. Political decisions are often made on the basis of inadequate knowledge, and usually without doing much harm. In the culture of politics, a leader gains respect by saying: “The buck stops here.” To take a chance of making a bad decision is better than to be indecisive. The culture of engineering is different. An engineer gains respect by saying: “Better safe than sorry.” Engineers are trained to look for weak points in a design–to warn of potential disaster. When politicians are in charge of an engineering venture, the two cultures clash. When the venture involves machines that fly in the air, a clash tends to result in a crash.

Aviation is the branch of engineering that is least forgiving of mistakes. But from a wider point of view, unforgivingness may be a virtue. In the long view of history, the victims of the R101 and the Comets did not die in vain. They left as the legacy of their tragedy the extraordinarily safe and reliable airplanes that now fly every day across continents and oceans all over the world. Without the harsh lessons of disaster and death, the modern jetliner would not have evolved.

My friend Albert Hirschman has found other places where unforgivingness is a virtue. He is an economist who has spent much of his life studying Latin American societies and giving advice to their governments. He has also given advice to newly independent countries in Africa. He is often asked by the leaders of poor countries, “Should we put our limited resources into roads or into airlines?” When this question is asked, the natural impulse of an economist is to say “roads,” because the money spent on roads provides jobs for local people, and the roads benefit all classes of society. In contrast, the building of a national airline requires the import of foreign technology, and the airline benefits only the minority of citizens who can afford to use it. Nevertheless, long experience in Africa and Latin America has taught Hirschman that “roads” is usually the wrong answer. In the real world, roads have several disadvantages. The money assigned to road-building tends to fall into the hands of corrupt local officials. Roads are easier to build than to maintain. And when, as usually happens, the new roads decay after a few years, the decay is gradual and does not create a major scandal. The end-result of road-building is that life continues as before. The economist who said “roads” has achieved little except a small increase in the wealth and power of local officials.

Contrast this with the real-world effect of building a national airline. After the money is spent, the country is left with some expensive airplanes, some expensive airports, and some expensive modern equipment. The foreign technicians have left and local people must be trained to operate the system. Unlike roads, airplanes do not decay gracefully. A crash of an airliner is a highly visible event and brings unacceptable loss of prestige to the rulers of the country. The victims tend to be people of wealth and influence, and their deaths do not pass unnoticed. The rulers have no choice. Once they own an airline, they are compelled to see to it that the airline is competently run. They are forced to create a cadre of highly motivated people who maintain the machines, come to work on time, and take pride in their technical skills. As a result, the airline brings to the country indirect benefits that are larger than its direct economic value. It creates a substantial body of citizens accustomed to strict industrial discipline and imbued with a modern work ethic. And these citizens will in time find other useful things to do with their skills besides taking care of airplanes. In this paradoxical way, the unforgivingness of aviation makes it the best school for teaching a traditional society how to modernize.

This is not the first time that an unforgiving technology has transformed the world and forced traditional societies to change. The role of aviation today is similar to the role of sailing ships in the preindustrial world. King Henry VIII of England, the most brutal and most intelligent of English monarchs, destroyer of monasteries and founder of colleges, murderer of wives and composer of madrigals, for whose soul regular prayers are still said at Trinity College Cambridge in gratitude for his largesse, understood that the most effective tool for modernizing England was the creation of a Royal Navy. It was not by accident that the industrial revolution of the eighteenth century began in England, in the island where daily life and economics had been dominated for 300 years by the culture of sailing ships. When the young Tsar Peter the Great of Russia, a kindred spirit to Henry, decided that the time had come to modernize the Russian empire, he prepared himself for the job by going to work as an apprentice in a shipyard.

The R101 and Comet tragedies are examples of the baleful effects of ideology, the ideology in those cases being old-fashioned British imperialism. Today, the British Empire is ancient history, and its ideology is dead. But technologies driven by ideology are likely to run into trouble, even when the ideology is not so outmoded. Another powerful ideology that ran into trouble is nuclear energy. All over the world, after the end of World War II, the ideology of nuclear energy flourished, driven by an intense desire to create something peaceful and useful out of the ruins of Hiroshima and Nagasaki. Scientists and politicians and industrial leaders were equally bewitched by this vision, that the great new force of nature that killed and maimed in war would now make deserts bloom in peace. Nuclear energy was so strange and powerful that it looked like magic. It was easy to believe that this magic could bring wealth and prosperity to poor people all over the earth. So it happened that in all large countries and in many small ones, in democracies and dictatorships, in communist and capitalist societies alike, Atomic Energy Authorities were created to oversee the miracles that nuclear energy was expected to perform. Huge funds were poured into nuclear laboratories in the confident belief that these were sound investments for the future.

I visited Harwell, the main British nuclear research establishment, during the early days of nuclear enthusiasm. The first director of Harwell was Sir John Cockcroft, a first-rate scientist and an honest public servant. I walked around the site with Cockcroft, and we looked up at the massive electric power lines running out of the plant, over our heads and away into the distance. Cockcroft remarked, “The public imagines that the electricity is flowing out of this place into the national grid. When I tell them that it is all flowing the other way, they don’t believe me.”

There was nothing wrong, and there is still nothing wrong, with using nuclear energy to make electricity. But the rules of the game must be fair, so that nuclear energy competes with other sources of energy and is allowed to fail if it does badly. So long as it is allowed to fail, nuclear energy can do no great harm. But the characteristic feature of an ideologically driven technology is that it is not allowed to fail. And that is why nuclear energy got into trouble. The ideology said that nuclear energy must win. The promoters of nuclear energy believed as a matter of faith that it would be safe and clean and cheap and a blessing to humanity. When evidence to the contrary emerged, the promoters found ways to ignore the evidence. They wrote the rules of the game so that nuclear energy could not lose. The rules for cost-accounting were written so that the cost of nuclear electricity did not include the huge public investments that had been made to develop the technology and to manufacture the fuel. The rules for reactor safety were written so that the type of light-water reactor originally developed by the United States Navy for propelling submarines was by definition safe. The rules for environmental cleanliness were written so that the ultimate disposal of spent fuel and worn-out machinery was left out of consideration. With the rules so written, nuclear energy confirmed the beliefs of its promoters. According to these rules, nuclear energy was indeed cheap and clean and safe.

The people who wrote the rules did not intend to deceive the public. They deceived themselves, and then fell into a habit of suppressing evidence that contradicted their firmly held beliefs. In the end, the ideology of nuclear energy collapsed because the technology that was not allowed to fail was obviously failing. In spite of the government subsidies, nuclear electricity did not become significantly cheaper than electricity made by burning coal and oil. In spite of the declared safety of light-water reactors, accidents occasionally happened. In spite of the environmental advantages of nuclear power plants, disposal of waste fuel remained an unsolved problem. The public, in the end, reacted harshly against nuclear power because obvious facts contradicted the claims of the promoters.

When a technology is allowed to fail in competition with other technologies, the failure is a part of the normal Darwinian process of evolution, leading to improvements and possible later success. When a technology is not allowed to fail, and still it fails, the failure is far more damaging. If nuclear power had been allowed to fail at the beginning, it might well have evolved by now into a better technology which the public would trust and support. There is nothing in the laws of nature that stops us from building better nuclear power plants. We are stopped by deep and justified public distrust. The public distrusts the experts because they claimed to be infallible. The public knows that human beings are fallible. Only people blinded by ideology fall into the trap of believing in their own infallibility.

The tragedy of nuclear fission energy is now almost at an end, so far as the United States is concerned. Nobody wants to build any new fission power plants. But another tragedy is still being played out, the tragedy of nuclear fusion. The promoters of fusion are making the same mistakes that the promoters of fission made thirty years earlier. The promoters are no longer experimenting with a variety of fusion schemes in order to evolve a machine that might win in the marketplace. They long ago decided to concentrate their main effort upon a single device, the Tokamak, which is declared by ideological fiat to be the energy producer for the twenty-first century. The Tokamak was invented in Russia, and its inventors gave it a name that transliterates euphoniously into other languages. All the countries with serious programs of fusion research have built Tokamaks. One of the largest and most expensive is in Princeton. To me it looks like a plumber’s nightmare, a dense conglomeration of pipes and coils with no space for anybody to go in and fix it when it needs repairs. But the people who built it believe sincerely that it is an answer to human needs. The various national fusion programs are supposed to converge upon a huge international Tokamak, costing many billions of dollars, which will be the prototype for the fusion power producers of the future. The usual claims are made, that fusion power will be safe and clean, although even the promoters are no longer saying that it will be cheap. The existing fusion programs have stopped the evolution of a new technology that might actually fulfill the hopes of the promoters. What the world needs is a small, compact, flexible fusion technology that could make electricity where and when it is needed. The existing fusion program is leading to a huge source of centralized power, at a price that nobody except a government can afford. 
It is likely that the existing fusion program will sooner or later collapse as the fission program collapsed, and we can only hope that some more useful form of fusion technology will rise from the wreckage.

Appendix F – Personal observations on the reliability of the Shuttle

Having settled in the United States, Bethe went to the Washington Conferences on Theoretical Physics every year from 1935 until 1937. He decided not to take part in 1938, because the subject was energy production in stars and “he wasn’t interested in that problem”. At the urging of fellow émigré Edward Teller, he finally went, and what he heard led him to discover the CNO cycle in stars, in which reactions between protons and nuclei convert carbon sequentially into nitrogen and oxygen and back to carbon, liberating energy. And he identified the dominant processes that power the Sun. At first, the editor of Physical Review was not enthusiastic about the CNO article. The resultant delay in publishing proved fortunate for Bethe: it enabled him to win the New York Academy of Science’s US$500 prize for an unpublished work on stellar energy. The same work was later instrumental in him winning the Nobel Prize in Physics.

“Computer so-called science actually has a lot in common with magic.” – Hal Abelson

Biological information-processing systems operate on completely different principles from those with which engineers are familiar. For many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those we have been able to implement using digital methods. I have shown that this advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals, rather than by the absolute values of digital signals. I have argued that this approach requires adaptive techniques to correct for differences between nominally identical components, and that this adaptive capability leads naturally to systems that learn about their environment.
– Carver Mead –

One pill makes you larger
And one pill makes you small
And the ones that mother gives you
Don’t do anything at all
Go ask Alice
When she’s ten feet tall
And if you go chasing rabbits
And you know you’re going to fall
Tell ’em a hookah smoking caterpillar
Has given you the call to
Call Alice
When she was just small
When the men on the chessboard
Get up and tell you where to go
And you’ve just had some kind of mushroom
And your mind is moving
Go ask Alice
I think she’ll know
When logic and proportion
Have fallen sloppy dead
And the White Knight is talking backwards
And the Red Queen’s “off with her head!”
Remember what the dormouse said:
“Feed your head”


“Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

nothing |ˈnəTHiNG|
pronoun not anything; no single thing: I said nothing | there’s nothing you can do | they found nothing wrong. • something of no importance or concern: “What are you laughing at?” “Oh, nothing, sir.” | they are nothing to him | [as noun]: no longer could we be treated as nothings. • (in calculations) no amount; zero.
adjective [attrib.] informal having no prospect of progress; of no value: he had a series of nothing jobs.
adverb not at all: she cares nothing for others | he looks nothing like the others. • [postpositive] informal used to contradict something emphatically: “This is a surprise.” “Surprise nothing.”
ORIGIN Old English nān thing (see no, thing).

begin |biˈgin|
verb (begins, beginning; past began |-ˈgan|; past participle begun |-ˈgən|)
1 [with obj.] start; perform or undergo the first part of (an action or activity): theorists have just begun to address these complex questions | she began a double life | (begin to do/doing something): it was beginning to snow | [no obj.]: she began by rewriting the syllabus. • [no obj.] come into being or have its starting point at a certain time or place: the ground campaign had begun | the story begins with the death of her senile father | the tour begins at the active Poas Volcano. • [no obj.] (of a person) hold a specific position or role before holding any other: he began as a drummer. • [no obj.] (of a thing) originate: Watts Lake began as a marine inlet. • [no obj.] (begin with) have as a first element: words beginning with a vowel. • [no obj.] (begin on/upon) set to work at: Picasso began on a great canvas. • [with direct speech] start speaking by saying: “I’ve got to go to the hotel,” she began. • [no obj.] (begin at) (of an article) cost at least (a specified amount): rooms begin at $139.
2 [no obj. with negative] informal not have any chance or likelihood of doing a specified thing: circuitry that Karen could not begin to comprehend.
ORIGIN Old English beginnan, of Germanic origin; related to Dutch and German beginnen.

propaganda |ˌpräpəˈgandə|
noun
1 chiefly derogatory information, esp. of a biased or misleading nature, used to promote or publicize a particular political cause or point of view: he was charged with distributing enemy propaganda. • the dissemination of such information as a political strategy: the party’s leaders believed that a long period of education and propaganda would be necessary.
2 (Propaganda) a committee of cardinals of the Roman Catholic Church responsible for foreign missions, founded in 1622 by Pope Gregory XV.
ORIGIN Italian, from modern Latin congregatio de propaganda fide ‘congregation for propagation of the faith’ (sense 2). Sense 1 dates from the early 20th cent.

information |ˌinfərˈmāSHən|
noun
1 facts provided or learned about something or someone: a vital piece of information. • Law a formal criminal charge lodged with a court or magistrate by a prosecutor without the aid of a grand jury: the tenant may lay an information against his landlord.
2 what is conveyed or represented by a particular arrangement or sequence of things: genetically transmitted information. • Computing data as processed, stored, or transmitted by a computer. • (in information theory) a mathematical quantity expressing the probability of occurrence of a particular sequence of symbols, impulses, etc., as contrasted with that of alternative sequences.
DERIVATIVES informational |-SHənl| adjective, informationally |-SHənl-ē| adverb
ORIGIN late Middle English (also in the sense ‘formation of the mind, teaching’), via Old French from Latin informatio(n-), from the verb informare (see inform).

conjecture |kənˈjekCHər|
noun an opinion or conclusion formed on the basis of incomplete information: conjectures about the newcomer were many and varied | the purpose of the opening in the wall is open to conjecture. • an unproven mathematical or scientific theorem: the Goldbach conjecture. • (in textual criticism) the suggestion or reconstruction of a reading of a text not present in the original source.
verb [with obj.] form an opinion or supposition about (something) on the basis of incomplete information: he conjectured the existence of an otherwise unknown feature | many conjectured that she had a second husband in mind. • (in textual criticism) propose (a reading).
DERIVATIVES conjecturable adjective
ORIGIN late Middle English (in the senses ‘to divine’ and ‘divination’): from Old French, or from Latin conjectura, from conicere ‘put together in thought,’ from con- ‘together’ + jacere ‘throw.’

criticism |ˈkritəˌsizəm|
noun
1 the expression of disapproval of someone or something based on perceived faults or mistakes: he received a lot of criticism | he ignored the criticisms of his friends.
2 the analysis and judgment of the merits and faults of a literary or artistic work: alternative methods of criticism supported by well-developed literary theories. • the scholarly investigation of literary or historical texts to determine their origin or intended form.
ORIGIN early 17th cent.: from critic or Latin criticus + -ism.

test 1 |test|
noun
1 a procedure intended to establish the quality, performance, or reliability of something, esp. before it is taken into widespread use: no sparking was visible during the tests. • a short written or spoken examination of a person’s proficiency or knowledge: a spelling test. • an event or situation that reveals the strength or quality of someone or something by putting them under strain: this is the first serious test of the peace agreement. • an examination of part of the body or a body fluid for medical purposes, esp. by means of a chemical or mechanical procedure rather than simple inspection: a test for HIV | eye tests. • Chemistry a procedure employed to identify a substance or to reveal the presence or absence of a constituent within a substance. • the result of a medical examination or analytical procedure: a positive test for protein. • a means of establishing whether an action, item, or situation is an instance of a specified quality, esp. one held to be undesirable: a statutory test of obscenity.
2 Metallurgy a movable hearth in a reverberating furnace, used for separating gold or silver from lead.
verb [with obj.] take measures to check the quality, performance, or reliability of (something), esp. before putting it into widespread use or practice: this range has not been tested on animals | (as noun testing): the testing and developing of prototypes | figurative: a useful way to test out ideas before implementation. • reveal the strengths or capabilities of (someone or something) by putting them under strain: such behavior would severely test any marriage. • give (someone) a short written or oral examination of their proficiency or knowledge: all children are tested at eleven. • judge or measure (someone’s proficiency or knowledge) by means of such an examination. • carry out a medical test on (a person, a part of the body, or a body fluid). • [no obj.] produce a specified result in a medical test, esp. a drug test or AIDS test: he tested positive for steroids after the race. • Chemistry examine (a substance) by means of a reagent. • touch or taste (something) to check that it is acceptable before proceeding further: she tested the water with the tip of her elbow.
DERIVATIVES testability |ˌtestəˈbilitē| noun, testable adjective, testee |-ˈtē| noun
ORIGIN late Middle English (denoting a cupel used to treat gold or silver alloys or ore): via Old French from Latin testu, testum ‘earthen pot,’ variant of testa ‘jug, shell.’ Compare with test 2. The verb dates from the early 17th cent.

science |ˈsīəns|
noun the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment: the world of science and technology. • a particular area of this: veterinary science | the agricultural sciences. • a systematically organized body of knowledge on a particular subject: the science of criminology. • archaic knowledge of any kind.
ORIGIN Middle English (denoting knowledge): from Old French, from Latin scientia, from scire ‘know.’

knowledge |ˈnälij|
noun
1 facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject: a thirst for knowledge | her considerable knowledge of antiques. • what is known in a particular field or in total; facts and information: the transmission of knowledge. • Philosophy true, justified belief; certain understanding, as opposed to opinion.
2 awareness or familiarity gained by experience of a fact or situation: the program had been developed without his knowledge | he denied all knowledge of the overnight incidents.
ORIGIN Middle English (originally as a verb in the sense ‘acknowledge, recognize,’ later as a noun): from an Old English compound based on cnāwan (see know).

infinity |inˈfinitē|
noun (pl. infinities)
1 the state or quality of being infinite: the infinity of space. • an infinite or very great number or amount: an infinity of combinations. • a point in space or time that is or seems infinitely distant: the lawns stretched into infinity.
2 Mathematics a number greater than any assignable quantity or countable number (symbol ∞).
ORIGIN late Middle English: from Old French infinite or Latin infinitas, from infinitus (see infinite).






Nobel Lecture

Nobel Lecture, December 8, 1993
The Polymerase Chain Reaction by Kary B. Mullis
In 1944 Erwin Schroedinger, stimulated intellectually by Max Delbrück, published a little book called What is Life? It was an inspiration to the first of the molecular biologists, and has been, along with Delbrück himself, credited for directing the research during the next decade that solved the mystery of how “like begat like.”
Max was awarded this Prize in 1969, and rejoicing in it, he also lamented that the work for which he was honored before all the peoples of the world was not something which he felt he could share with more than a handful. Samuel Beckett’s contributions in literature, being honored at the same time, seemed to Max somehow universally accessible to anyone. But not his. In his lecture here Max imagined his imprisonment in an ivory tower of science.
“The books of the great scientists,” he said, “are gathering dust on the shelves of learned libraries. And rightly so. The scientist addresses an infinitesimal audience of fellow composers. His message is not devoid of universality but its universality is disembodied and anonymous. While the artist’s communication is linked forever with its original form, that of the scientist is modified, amplified, fused with the ideas and results of others, and melts into the stream of knowledge and ideas which forms our culture. The scientist has in common with the artist only this: that he can find no better retreat from the world than his work and also no stronger link with his world than his work.”
Well, I like to listen to the wisdom of Max Delbrück. Like my other historical hero, Richard Feynman, who also passed through here, Max had a way of seeing directly into the core of things and clarifying it for the rest of us.
But I am not convinced with Max that the joy of scientific creation must remain completely mysterious and unexplainable, locked away from all but a few esoterically informed colleagues. I lean toward Feynman in this matter. I think Feynman would have said, if you can understand it, you can explain it.
So I’m going to try to explain how it was that I invented the polymerase chain reaction. There’s a bit of it that will not easily translate into normal language. If that part weren’t of some interest to more than a handful of people here, I would just leave it out. What I will do instead is let you know when we get to that and also when we are done with it. Don’t trouble yourself over it. It’s esoteric and not crucial. I think you can understand what it felt like to invent PCR without following the details.
In 1953, when Jim Watson and Francis Crick published the structure of DNA, Schroedinger’s little book and I were eight years old. I was too young to notice that mankind had finally understood how it might be that “like begat like.” The book had been reprinted three times. I was living in Columbia, S.C., where no one noticed that we didn’t have a copy. But my home was a few blocks away from an undeveloped wooded area with a creek, possums, raccoons, poisonous snakes, dragons, and a railroad track. We didn’t need a copy. It was a wilderness for me and my brothers, an unknown and unregimented place to grow up. And if we got bored of the earth, we could descend into the network of storm drains under the city. We learned our way around that dark, subterranean labyrinth. It always frightened us. And we always loved it.
By the time Watson and Crick were being honored here in Stockholm in 1962, I had been designing rockets with my adolescent companions for three years. For fuel, we discovered that a mixture of potassium nitrate and sugar could be very carefully melted over a charcoal stove and poured into a metal tube in a particular way with remarkable results. The tube grew larger with our successive experiments until it was about four feet long. My mother grew more cautious and often her head would appear out of an upstairs window and she would say things that were not encouraging. The sugar was reluctantly furnished from her own kitchen, and the potassium nitrate we purchased from the local druggist.
Back then in South Carolina young boys seeking chemicals were not immediately suspect. We could even buy dynamite fuse from the hardware with no questions asked. This was good, because we were spared from early extinction on one occasion when our rocket exploded on the launch pad, by the very reliable, slowly burning dynamite fuses we could employ, coupled with our ability to run like the wind once the fuse had been lit. Our fuses were in fact much improved over those which Alfred Nobel must have used when he was frightening his own mother. In one of our last experiments before we became so interested in the maturing young women around us that we would not think deeply about rocket fuels for another ten years, we blasted a frog a mile into the air and got him back alive. In another, we inadvertently frightened an airline pilot, who was preparing to land a DC-3 at Columbia airport. Our mistake.
At Dreher High School, we were allowed free, unsupervised access to the chemistry lab. We spent many an afternoon there tinkering. No one got hurt and no lawsuits resulted. They wouldn’t let us in there now. Today, we would be thought of as a menace to society. If I’m not mistaken, Alfred Nobel for a time was not allowed to practice his black art on Swedish soil. Sweden, of course, was then and still is a bit ahead of the United States in these matters.
I never tired of tinkering in labs. During the summer breaks from Georgia Tech, Al Montgomery and I built an organic synthesis lab in an old chicken house on the edge of town where we made research chemicals to sell. Most of them were either noxious or explosive. No one else wanted to make them, somebody wanted them, and so their production became our domain. We suffered no boredom and no boss. We made enough money to buy new equipment. Max Gergel, who ran Columbia Organic Chemicals Company, and who was an unusually nice man, encouraged us and bought most of our products, which he resold. There were no government regulators to stifle our fledgling efforts, and it was a golden age, but we didn’t notice it. We learned a lot of organic chemistry.
By the time I left Georgia Tech for graduate school in biochemistry at the University of California at Berkeley, the genetic code had been solved. DNA did not yet interest me. I was excited by molecules. DNA before PCR was long and stringy, not really molecular at all. Six years in the biochemistry department didn’t change my mind about DNA, but six years of Berkeley changed my mind about almost everything else.
I was in the laboratory of Joe Neilands, who provided his graduate students with a place to work and very few rules. I’m not even sure that Joe knew any rules except the high moral ground of social responsibility and tolerance. Not knowing that the department did have rules, I took astrophysics courses instead of molecular biology, which I figured I could learn from my molecular biologist friends. I published my first scientific paper in Nature, in 1968. It was a sophomoric astrophysical hypothesis called “The Cosmological Significance of Time Reversal.” I think Nature is still embarrassed about publishing it, but it was immensely useful to me when it came time for my qualifying examination. The committee would decide whether or not I would be allowed to take a Ph.D. without having taken molecular biology. And my paper in Nature helped them to justify a “yes.” In retrospect, the membership of that committee is intriguing.
Don Glaser, who received this Prize in physics in 1960 at age 34, would later be one of the founders of Cetus Corporation, where I was working when I invented PCR. Henry Rapaport, who discovered psoralens, would be the scientific advisor to my department at Cetus, and would co-author two patents with me. Alan Wilson, now sadly passed away, would be the first researcher outside of Cetus to employ PCR. And Dan Koshland would be the editor of Science when my first PCR paper was rejected from that journal and also the editor when PCR was three years later proclaimed Molecule of the Year. I passed. None of us, I think, as we walked out of that room, had any conscious inkling of the way things would turn out among us.
In Berkeley it was a time of social upheaval and Joe Neilands was the perfect mentor to see his people through it with grace. We laughed a lot over tea at four every afternoon around a teakwood table that Joe had brought from home and oiled once a month. Our lab had an ambience that was special. I decided to become a neurochemist. Joe was the master of microbial iron transport molecules. It wasn’t done like that in most labs, where the head of the lab would prefer that you help advance his career by elaborating on some of his work. Not so with Neilands. As long as I wrote a thesis and got a degree, he didn’t care what else I did, and I stayed in his lab happily, following my own curiosity even if it carried me into music courses, for as long as Joe thought we could get away with it. The department was paying me a monthly stipend from the NIH, and eventually, Joe knew, I would have to leave.
After six years I headed east with a Ph.D. and confidence in my education. My wife of a few months went to Kansas to go to medical school and I followed her there. That was 1972.
I had made no professional plans that would work in Kansas, so I decided to become a writer. I discovered pretty quickly that I was far too young. I didn’t know anything yet about tragedy, and my characters were flat. I didn’t know how to describe a mean spirit in terms someone else could believe.
So I had to get a job as a scientist. I found one at the medical school working with two pediatric cardiologists and a pathologist. It was a very fortunate accident. For one thing pediatricians are always the nicest doctors, and for another thing these doctors were very special: Leone Mattioli, whose wife could cook, Agostino Molteni and Richard Zakheim. For two years I did medical research, learned how to appreciate Old World values from two Italians and a New York Jew, and learned human biology for the first time.
Marriage over, I returned to Berkeley, working for a time in a restaurant and then at the University of California at San Francisco killing rats for their brains. I saw Max Delbrück talk, but I don’t think I understood the significance of who he was, nor was I influenced to go into molecular biology by him. I was working on the enkephalins.
But then there was a seminar describing the synthesis and cloning of a gene for somatostatin. That impressed me. For the first time I realized that significant pieces of DNA could be synthesized chemically and that they were likely to be very exciting. I started studying DNA synthesis in the library. And I started looking for a job making DNA molecules.
Cetus hired me in the fall of 1979. I worked long hours and enjoyed it immensely. DNA synthesis was much more fun than killing rats, and the San Francisco Bay Area was a good place to be doing it. There were a number of biotechnology companies and several academic groups working on improving the synthesis methods for DNA. Within two years, there was a machine in my lab from Biosearch of San Rafael, California, turning out oligonucleotides much faster than the molecular biologists at Cetus could use them. I started playing with the oligonucleotides to find out what they could do.
The lab next door to me was run by Henry Erlich and was working on methods for detecting point mutations. We had made a number of oligonucleotides for them. I started thinking about their problem and proposed an idea of my own which they ended up calling oligomer restriction. It worked as long as the target sequence was fairly concentrated, like a site on a purified plasmid, but it didn’t work if the site was relatively rare, like a single copy gene in human DNA.
I apologize to those of you who just got lost, but I do have to say a few things now that are going to be difficult. I will get back to the story in a few minutes.
The oligomer restriction method also relied on the fact that the target of interest contained a restriction site polymorphism, which kept it from being universally applicable to just any point mutation. I started thinking about doing some experiments wherein an oligonucleotide hybridized to a specific site could be extended by DNA polymerase in the presence of only dideoxynucleoside triphosphates. I reasoned that if one of the dideoxynucleoside triphosphates in each of four aliquots of a reaction was radioactive then an analysis of the aliquots on a gel could indicate which of the dideoxynucleoside triphosphates had added to the hybridized oligonucleotide and therefore which base was adjacent to the three prime end of the oligonucleotide. It would be like doing Sanger sequencing at a single base pair.
On human DNA, it would not have worked because the oligonucleotide would not have specifically bound to a single site. On a DNA as complex as human DNA it would have bound to hundreds or thousands of sites depending on the sequence involved and the conditions used. What I needed to make this work was some method of raising the relative concentration of the specific site of interest. What I needed was PCR, but I had not considered that possibility. I knew the difference numerically between five thousand base pairs as in a plasmid and three billion base pairs as in the human genome, but somehow it didn’t strike me as sharply as it should have. My ignorance served me well. I kept on thinking about my experiment without realizing that it would never work. And it turned into PCR.
One Friday night I was driving, as was my custom, from Berkeley up to Mendocino where I had a cabin far away from everything off in the woods. My girlfriend, Jennifer Barnett, was asleep. I was thinking. Since oligonucleotides were not that hard to make anymore, wouldn’t it be simple enough to put two of them into the reaction instead of only one, such that one of them would bind to the upper strand and the other to the lower strand with their three prime ends adjacent to the opposing bases of the base pair in question? If one were made longer than the other then their single base extension products could be separated on a gel from each other and one could act as a control for the other. I was going to have to separate them on a gel anyway from the large excess of radioactive nucleoside triphosphate. What I would hope to see is that one of them would pick up one radioactive nucleotide and the other would pick up its complement. Other combinations would indicate that something had gone wrong. It was not a perfect control, but it would not require a lot of effort. It was about to lead me to PCR.
I liked the idea of a control that was nearly free in terms of cost and effort. And also, it would help use up the oligonucleotides that my lab could now make faster than they could be used.
As I drove through the mountains that night, the stalks of the California buckeyes heavily in blossom leaned over into the road. The air was moist and cool and filled with their heady aroma.
Encouraged by my progress on the thought experiment I continued to think about it and about things that could possibly go wrong. What if there were deoxynucleoside triphosphates in the DNA sample, for instance? What would happen? What would happen, I reasoned, is that one or more of them would be added to the oligonucleotide by the polymerase prior to the termination of chain elongation by addition of the dideoxynucleoside triphosphate, and it could easily be the wrong dideoxynucleoside triphosphate and it surely would result in an extension product that would be the wrong size, and the results would be spurious. It would not do. I needed a way to ensure that the sample was free from contamination from deoxynucleoside triphosphates. I could treat the sample before the extension reaction with bacterial alkaline phosphatase. The enzyme would degrade any triphosphates present down to nucleosides which would not interfere with the main reaction, but then I would need to deactivate the phosphatase before adding the dideoxynucleoside triphosphates, and everyone knew at that time that BAP, as we called it, was not irreversibly denaturable by heat. The reason we knew this was that the renaturation of heat denatured BAP had been demonstrated in classic experiments that had shown that a protein’s shape was dictated by its sequence. In the classical experiments the renaturation had been performed in a buffer containing lots of zinc. What had not occurred to me or apparently many others was that BAP could be irreversibly denatured if zinc was omitted from the buffer, and that zinc was not necessary in the buffer if the enzyme was only going to be used for a short time and had its own tightly bound zinc to begin with. There was a product on the market at the time called matBAP wherein the enzyme was attached to an insoluble matrix which could be filtered out of a solution after it had been used.
The product sold because people were of the impression that you could not irreversibly denature BAP. We’d all heard about, but not read, the classic papers.
This says something about the arbitrary way that many scientific facts get established, but for this story, its only importance is that, had I known then that BAP could be heat denatured irreversibly, I may have missed PCR. As it was, I decided against using BAP, and tried to think of another way to get rid of deoxynucleoside triphosphates. How about this, I thought: what if I leave out the radioactive dideoxynucleoside triphosphates, mix the DNA sample with the oligonucleotides, drop in the polymerase and wait? The polymerase should use up all the deoxynucleoside triphosphates by adding them to the hybridized oligonucleotides. After this was complete I could heat the mixture, causing the extended oligonucleotides to be removed from the target, then cool the mixture allowing new, unextended oligonucleotides to hybridize. The extended oligonucleotides would be far outnumbered by the vast excess of unextended oligonucleotides and therefore would not rehybridize to the target to any great extent. Then I would add the dideoxynucleoside triphosphate mixtures, and another aliquot of polymerase. And now things would work.
But what if the oligonucleotides in the original extension reaction had been extended so far they could now hybridize to unextended oligonucleotides of the opposite polarity in this second round? The sequence which they had been extended into would permit that. What would happen?
EUREKA!!!! The result would be exactly the same only the signal strength would be doubled.
EUREKA again!!!! I could do it intentionally, adding my own deoxynucleoside triphosphates, which were quite soluble in water and legal in California.
And again, EUREKA!!!! I could do it over and over again. Every time I did it I would double the signal. For those of you who got lost, we’re back! I stopped the car at mile marker 46.7 on Highway 128. In the glove compartment I found some paper and a pen. I confirmed that two to the tenth power was about a thousand and that two to the twentieth power was about a million, and that two to the thirtieth power was around a billion, close to the number of base pairs in the human genome. Once I had cycled this reaction thirty times I would be able to determine the sequence of a sample with an immense signal and almost no background.
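The glove-compartment arithmetic above is just exponential doubling: each cycle of the reaction roughly doubles the number of copies of the target. A few lines of Python (an illustrative sketch, not part of the lecture) confirm the powers of two Mullis checked on paper:

```python
# Idealized PCR amplification: each cycle doubles the copy count,
# so n cycles yield about 2**n copies from a single template molecule.
def copies_after(cycles, start=1):
    """Idealized copy count after a given number of doubling cycles."""
    return start * 2 ** cycles

print(copies_after(10))  # 1024        -- about a thousand
print(copies_after(20))  # 1048576     -- about a million
print(copies_after(30))  # 1073741824  -- about a billion, on the order of
                         # the number of base pairs in the human genome
```

In a real reaction the per-cycle efficiency is below 100%, so the yield is closer to (1 + e)^n with e < 1, but the back-of-the-envelope doubling captures why thirty cycles suffice.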
Jennifer wanted to get moving. I drove on down the road. In about a mile it occurred to me that the oligonucleotides could be placed at some arbitrary distance from each other, not just flanking a base pair and that I could make an arbitrarily large number of copies of any sequence I chose and what’s more, most of the copies after a few cycles would be the same size. That size would be up to me. They would look like restriction fragments on a gel. I stopped the car again.
“Dear Thor!” I exclaimed. I had solved the most annoying problems in DNA chemistry in a single lightning bolt. Abundance and distinction. With two oligonucleotides, DNA polymerase, and the four nucleoside triphosphates I could make as much of a DNA sequence as I wanted and I could make it on a fragment of a specific size that I could distinguish easily. Somehow, I thought, it had to be an illusion. Otherwise it would change DNA chemistry forever. Otherwise it would make me famous. It was too easy. Someone else would have done it and I would surely have heard of it. We would be doing it all the time. What was I failing to see? “Jennifer, wake up. I’ve thought of something incredible.”
She wouldn’t wake up. I had thought of incredible things before that somehow lost some of their sheen in the light of day. This one could wait till morning. But I didn’t sleep that night. We got to my cabin and I started drawing little diagrams on every horizontal surface that would take pen, pencil or crayon until dawn, when with the aid of a last bottle of good Mendocino county cabernet, I settled into a perplexed semiconsciousness.
Afternoon came, including new bottles of celebratory red fluids from Jack’s Valley Store, but I was still puzzled, alternating between being absolutely pleased with my good luck and clever brain, and being mildly annoyed at myself and Jennifer Barnett, for not seeing the flaw that must have been there. I had no phone at the cabin and there were no other biochemists besides Jennifer and me in Anderson Valley. The conundrum, which lingered throughout the weekend and created an unprecedented desire in me to return to work early, was compelling. If the cyclic reactions which by now were symbolized in various ways all over the cabin really worked, why had I never heard of them being used? If they had been used, I surely would have heard about it and so would everybody else including Jennifer, who was presently sunning herself by the pond taking no interest in the explosions that were rocking my brain.
Why wouldn’t these reactions work?
Monday morning I was in the library. The moment of truth. By afternoon it was clear. For whatever reasons, there was nothing in the abstracted literature about succeeding or failing to amplify DNA by the repeated reciprocal extension of two primers hybridized to the separate strands of a particular DNA sequence. By the end of the week I had talked to enough molecular biologists to know that I wasn’t missing anything really obvious. No one could recall such a process ever being tried.
However, shocking to me, not one of my friends or colleagues would get excited over the potential for such a process. True. I was always having wild ideas, and this one maybe looked no different than last week’s. But it WAS different. There was not a single unknown in the scheme. Every step involved had been done already. Everyone agreed that you could extend a primer on a DNA template, everyone knew you could melt double stranded DNA. Everyone agreed that what you could do once, you could do again. Most people didn’t like to do things over and over, me in particular. If I had to do a calculation twice, I preferred to write a program instead. But no one thought it was impossible. It could be done, and there was always automation. The result on paper was so obviously fantastic, that even I had little irrational lapses of faith that it would really work in a tube, and most everyone who could take a moment to talk about it with me, felt compelled to come up with some reason why it wouldn’t work. It was not easy in that post-cloning, pre-PCR year to accept the fact that you could have all the DNA you wanted. And that it would be easy.
I had a directory full of untested ideas in the computer. I opened a new file and named this one polymerase chain reaction. I didn’t immediately try an experiment, but all summer I kept talking to people in and out of the company. I described the concept around August at an in-house seminar. Every Cetus scientist had to give a talk twice a year. But no one had to listen. Most of the talks were dry descriptions of labor performed and most of the scientists left early without comment.
One or two technicians were interested, and on the days when she still loved me, Jennifer thought it might work. On the increasingly numerous days when she hated me, my ideas and I suffered her scorn together.
I continued to talk about it, and by late summer had a plan to amplify a 400-bp fragment from Human Nerve Growth Factor, which Genentech had cloned and published in Nature. I would start from whole human placental DNA from Sigma, taking a chance that the cDNA sequence had derived from a single exon. No need for a cDNA library. No colonies, no nothing. It would be dramatic. I would shoot for the moon. Primers were easy to come by in my lab, which made oligonucleotides for the whole company. I entered the sequences I wanted into the computer and moved them to the front of the waiting list.
My friend Ron Cook, who had founded Biosearch, and produced the first successful commercial DNA synthesis machine, was the only person I remember during that summer who shared my enthusiasm for the reaction. He knew it would be good for the oligonucleotide business. Maybe that’s why he believed it. Or maybe he’s a rational chemist with an intact brain. He’s one of my best friends now, so I have to disqualify myself from claiming any really objective judgement regarding him. Perhaps I should have followed his advice, but then things would have worked out differently and I probably wouldn’t be here on the beach in La Jolla writing this, which I enjoy. Maybe I would be rich in Tahiti. He suggested one night at his house that since no one at Cetus had taken it seriously, I should resign my job, wait a little while, make it work, write a patent, and get rich. By rich he wasn’t imagining $300,000,000. Maybe one or two. The famous chemist Albert Hofmann was at Ron’s that night. He had invented LSD in 1943. At the time he didn’t realize what he had done. It only dawned on him slowly, and then things worked their way out over the years like no one would have ever predicted, or could have controlled by forethought and reason.
I responded weakly to Ron’s suggestion. I had already described the idea at Cetus, and if it turned out to be commercially successful they would have lawyers after me forever. Ron was not sure that Cetus had rights on my ideas unless they were directly related to my duties. I wasn’t sure about the law, but I was pretty happy working at Cetus and assumed innocently that if the reaction worked big time I would be amply rewarded by my employer.
The subject of PCR was not yet party conversation, even among biochemists, and it quickly dropped. Albert being there was much more interesting, even to me. He had given a fine talk that afternoon at Biosearch.
Anyhow, my problems with Jennifer were not getting any better. That night was no exception to the trend. I drove home alone feeling sad and unsettled, not in the mood for leaving my job, or any big change in what was left of stability in my life. PCR seemed distant and very small compared to our very empty house.
In September I did my first experiment. I like to try the easiest possibilities first. So one night I put human DNA and the nerve growth factor primers in a little screw-cap tube with an O-ring and a purple top. I boiled for a few minutes, cooled, added about 10 units of DNA polymerase, closed the tube and left it at 37°C. It was exactly midnight on the ninth of September. I poured a cold Becks into a 400-ml beaker and contemplated my notebook for a few minutes before leaving the lab.
Driving home I figured that the primers would be extended right away, and I hoped that at some finite rate the extension products would come unwound from their templates, be primed and re-copied, and so forth. I did not relish the idea of heating, cooling, adding polymerase over and over again, and held this for a last resort method of accomplishing the chain reaction. I was thinking of DNA:DNA interactions as being reversible with all the ramifications thereof. I wasn’t concerned about the absolute rate of dissociation, because I didn’t care how long the reaction took as long as nobody had to do anything. I assumed there would always be some finite concentration of single strands, which would be available for priming by a relatively high concentration of primer with pseudo-first order kinetics.
For a reaction with the potential which I dreamed of for this one, especially in light of the absence of anything else that could do the same thing, time was only a very secondary consideration. Would it work at all was important. The next most important thing was, would it be easy to do? Then came time.
At noon the next day I went to the lab to take a 12-hour sample. There was no sign by ethidium bromide of any 400-bp bands. I could have waited another hundred years as I had no idea what the absolute rates might be. But I succumbed slowly to the notion that I couldn’t escape much longer the unpleasant prospect of cycling the reaction between single stranded temperatures and double stranded temperatures. This also meant adding the thermally unstable polymerase after every cycle.
For three months I did sporadic experiments while my life at home and in the lab with Jennifer was crumbling. It was slow going. Finally, I retreated from the idea of starting with human DNA, I wasn’t even absolutely sure that the Genentech sequence from Nature that I was using was from a single exon. I settled on a target of more modest proportions, a short fragment from pBR322, a purified plasmid. The first successful experiment happened on December 16th. I remember the date. It was the birthday of Cynthia, my former wife from Kansas City, who had encouraged me to write fiction and bore us two fine sons. I had strayed from Cynthia eventually to spend two tumultuous years with Jennifer. When I was sad for any other reason, I would also grieve for Cynthia. There is a general place in your brain, I think, reserved for “melancholy of relationships past.” It grows and prospers as life progresses, forcing you finally, against your grain, to listen to country music.
And now as December threatened Christmas, Jennifer, that crazy, wonderful woman chemist, had dramatically left our house, the lab, headed to New York and her mother, for reasons that seemed to have everything to do with me but which I couldn’t fathom. I was beginning to learn tragedy. It differs a great deal from pathos, which you can learn from books. Tragedy is personal. It would add strength to my character and depth someday to my writing. Just right then, I would have preferred a warm friend to cook with. Hold the tragedy lessons. December is a rotten month to be studying your love life from a distance.
I celebrated my victory with Fred Faloona, a young mathematician and a wizard of many talents whom I had hired as a technician. Fred had helped me that afternoon set up this first successful PCR reaction, and I stopped by his house on the way home. As he had learned all the biochemistry he knew directly from me he wasn’t certain whether or not to believe me when I informed him that we had just changed the rules in molecular biology. “Okay, Doc, if you say so.” He knew I was more concerned with my life than with those cute little purple-topped tubes.
In Berkeley it drizzles in the winter. Avocados ripen at odd times and the tree in Fred’s front yard was wet and sagging from a load of fruit. I was sagging as I walked out to my little silver Honda Civic, which never failed to start. Neither Fred, empty Becks bottles, nor the sweet smell of the dawn of the age of PCR could replace Jenny. I was lonesome.
From Nobel Lectures, Chemistry 1991-1995, Editor Bo G. Malmström, World Scientific Publishing Co., Singapore, 1997

A person kept a treasure in a village; but, he went to war and lost his knowledge of the path to the village.

He did not, however, forget his treasure upon seeing Love and set on a journey to discover it. After many hard years of travel through the sandy winds of the desert and the rainy nights of the rainforest, he came upon a cliff covered in ferns. Below, he saw a village.

The recognition of the village at once became immediate, visceral and vague. He knew that his treasure lay in that particular village at the base of the cliff, but he did not know why he knew. He only knew.

For many days and many nights, he encamped on the crest of the cliff thinking and analyzing how to descend. From many hopeful signs and enthusiastic configurations in his mind, he only in the end found despair. No path down the cliff left him alive as he tested the contortions and bold leaps of his imagination. The ferns blocked his descent in every regard. Descent meant death.

After the full moon came and went twice, he resolved to stop trying and simply rest his eyes on the village and enjoy the thought of the treasure in his own mind’s eye. What more could he wish for? He in the end found his village even if he could not touch it.

For a month he sat in a meditative bliss on the crest of the cliff, soaking in the rhythms of nature surrounding him and not looking down but rather transporting up, in a sense. But then it began to rain and his contentment washed away with the rainwater into the water table of the village.

His sadness overcame him. Even as the rain one last evening ceased, sadness overtook his soul. As the third full moon rose in the clear night sky, he followed its reflective rays to a hole in the canopy of the bamboo forest on his left. He had never noticed the hole in the canopy before. Not knowing what else to do, he walked through and away from his treasure, forever.

Or so he thought. Each day he traveled further and further away from his gilded village as the kite hawk soars in the sky. But not being a kite hawk and simply a man, he began to feel a sense of approach. To what he did not know.

We are the love that heals the land
We are the bliss that heals the sea
We are the brilliance that heals the sky

We are the music makers
We are the dreamers of dreams
We are the doers of deeds
We are the givers of gifts

To make a peaceful heaven,
on earth; think eunoia

“beautiful thinking”