Version 1.1
(December 10, 1997)
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
This paper attempts to outline the case for believing that we will have superhuman artificial intelligence within the first third of the next century.
Definition of "superintelligence"
By a "superintelligence" I mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, a number of interconnected computers, cultured cortical tissue or what have you.
Moore's law and present supercomputers
Moore's law states <A> that processor speed doubles every eighteen months. The doubling time used to be two years, but that changed about fifteen years ago. The most recent data points indicate a doubling time as short as twelve months. This would mean that there will be a thousand-fold increase in computational power in ten years.
Moore's law is what chip manufacturers rely on when they decide what sort of chip to develop. If we estimate the computational capacity of the human brain, and allow ourselves to extrapolate available processor speed according to Moore's law (whether doing so is permissible will be discussed presently), we can calculate how long it will take before computers have sufficient raw power to match a human intelligence.
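As a simple illustration of what such extrapolation involves (an illustrative sketch, not the paper's own calculation), the growth factor implied by a given doubling time is:

```python
# Illustrative sketch: extrapolating computing power under Moore's law
# for a given doubling time.
def growth_factor(years, doubling_time_years):
    """Factor by which computing power multiplies after `years`."""
    return 2 ** (years / doubling_time_years)

# With a 12-month doubling time, ten years gives roughly a thousand-fold increase:
print(growth_factor(10, 1.0))          # 1024.0
# With an 18-month doubling time, the same decade yields only about a hundred-fold:
print(round(growth_factor(10, 1.5)))   # 102
```

This is why the assumed doubling time (twelve versus eighteen months) matters so much for the dates derived below.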
The fastest supercomputer we have today (or will have in December 1997) runs at 1.5 teraops, i.e. 1.5*10^12 ops. There is a project that aims to extract 10 teraops from the Internet by having a hundred thousand volunteers install a screen saver on their computers that would allow a central computer to delegate some computations to them. This (so-called metacomputing) approach works best for tasks that are very easy to parallelize, such as an exhaustive search through a key space in an attempt to break a code. With better bandwidth connections in the future (e.g. optical fibers), large-scale metacomputing will work even better than today. Brain simulations should by their nature be relatively easy to parallelize, so huge brain simulations distributed over the Internet could perhaps be a feasible alternative in the future. We shall, however, disregard this possibility for present purposes, and regard the 1.5 Tops machine as the best we can do at the moment. The potential of metacomputing can be factored into our prognosis by viewing it as an additional reason to believe that available computing power will continue to grow as Moore's law predicts.
Even without any technological improvement we can do somewhat better than this, for example by doubling the number of chips in the box. A 3 Tops computer has been ordered by the US government to be used in testing and developing the nation's stockpile of nuclear weapons. However, considering that the cost of this machine is $94,000,000, it is clear that even massive extra funding would only yield a very modest increase in computing power in the short term.
How good are the grounds for believing that Moore's law will continue to hold in the future? It is clear that sooner or later it must fail. There are physical limitations on the density with which matter can store and process information. The Bekenstein bound gives an upper bound on the amount of information that can be contained within a given volume using a given amount of energy. Since space colonization would allow at most a polynomial (~t^3) expansion rate (assuming the maximal speed is bounded by the speed of light), the exponential increase of available computational power cannot be continued indefinitely, unless new physics is forthcoming.
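The underlying point, that a polynomially growing resource base eventually falls behind any exponential demand, can be illustrated with a toy comparison (purely illustrative numbers):

```python
# Toy illustration: resources growing as t^3 (light-speed-limited expansion)
# eventually fall arbitrarily far behind exponentially growing demand.
def cubic(t):
    return t ** 3

def exponential(t, doubling_time=1.0):
    return 2 ** (t / doubling_time)

# Early on the cubic term can dominate, but the exponential always overtakes it:
assert cubic(5) > exponential(5)        # 125 > 32
assert exponential(30) > cubic(30)      # ~10^9 vs 27000
```

No matter how the constants are chosen, there is some time after which the exponential curve is permanently ahead.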
In my opinion, Moore's law loses its credibility long before we reach absolute physical limits. I believe that it has very little credence beyond, say, the next fifteen years. That is not to say that processor speed will not continue to double every twelve or eighteen months after 2012, only that we cannot use Moore's law to argue that it will. Instead, if we want to make predictions beyond that date, we will have to look directly at what is physically feasible. That will presumably also mean that we have to content ourselves with a greater uncertainty interval along the time axis. Physical feasibility studies tell us, at most, what will happen given that people want it to happen, but even if we assume that the demand is there, they will still not tell us when it will happen.
In about the year 2007 we will have reached the physical limit of present silicon technology. Moore's law, however, has survived several technological phase transitions before; there is no reason why two-dimensional silicon wafers should be the last word in chip technology. Several ways to overcome the limits of the present technology have been proposed and are being developed.
In the near future, it might for example be possible to use phase shift masks to press the minimum circuit-line width on a microchip down to as little as 0.13 micrometer, while remaining in the optical range for the lithographic irradiation. Leaving the optical range, we could use x-rays or at least extreme ultraviolet ("EUV" or "soft x-rays") to attain finer precision. Failing this, it should be feasible to use electron beam writing, although this production method would be slow and hence expensive. A compromise would be to write the bottleneck gates, where speed is absolutely crucial, with an electron beam, and use optical or EUV lithography for the other elements of the chip. We can also increase the power of a chip by using more layers, a technique that has only recently been mastered, and by making bigger wafers (up to 300 mm should not be a problem). Drastically bigger chips could be manufactured if there were some error tolerance. Tolerance to error could be obtained by using evolvable hardware (DeGaris). It is also possible to push the physical limits on how small the transistors can be made by switching to new materials, such as gallium arsenide. Quantum transistors are presently being developed, promising a major step forward for circuitry where high switching speed or low energy consumption is essential. Because of the highly parallel nature of brain-like computations, it should also be possible to use a highly parallel architecture, in which case it will suffice to produce a great number of moderately fast processors and then have them connected in high-bandwidth local-area networks. We have already mentioned the possibility of metacomputing. <B>
These are all things that are being developed today. Massive funding is pumped into these technologies <C>. Although to a person working in the field, who is constantly focused on the immediate problems, the difficulties can appear staggering, it is still fair to say that there is a widespread optimism among the experts about the prospects for computers to continue to grow more powerful for the foreseeable future.
<A> In fact, it is not clear what, exactly, Moore's law says. The law derives its name from Gordon Moore, co-founder of Intel Corp., who back in 1965 noted that microchips were doubling in circuit density every year. In 1975 he made the prediction that from then on, the doubling time would be two years. The actual doubling time has fluctuated a bit: it started at one year, went up to two years, and is now back to approximately one year again. So one ambiguity in citing Moore's law is that it is unclear whether the time constant is supposed to be one year, two years, or whatever the most recent data points indicate. A second ambiguity resides in the fact that the initial statement was phrased in terms of the number of transistors that could be fitted into a unit area, rather than the speed of the resulting chip. Until now, this distinction hasn't mattered much, because circuit density and speed have been highly correlated and have grown at the same rate. When we look to the future, however, it is possible that we will achieve increased computing power by other means than making transistors smaller. It therefore makes sense to reformulate Moore's law as a statement asserting an exponential growth in computing power (per inflation-adjusted dollar). I think it is better to apply the label "Moore's law" to this slightly modified hypothesis than to invent a new term for what is basically the same idea.
<B> In the longer term, we also have to consider nanotechnology and quantum computing.
<C> It nowadays takes about 400 engineers to produce a new chip. A modern chip factory may cost over $2 billion. About $20 to $30 billion is spent on microchip R&D every year. These figures have grown over the years, so it should be pointed out that one factor that could slow the pace of development would be if the funding begins to level out, as sooner or later it will.
Hardware requirements
The human brain contains about 10^11 neurons. Each neuron has about 5*10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz; each signal contains, say, 5 bits. This equals 10^17 ops. <A>
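These numbers multiply out as follows (a back-of-envelope sketch; the constants are the ones assumed in the text):

```python
# Back-of-envelope reproduction of the upper-bound estimate in the text.
neurons = 1e11              # neurons in the human brain
synapses_per_neuron = 5e3   # synapses per neuron
firing_rate_hz = 1e2        # average signalling frequency per synapse

transmissions_per_sec = neurons * synapses_per_neuron * firing_rate_hz
print(f"{transmissions_per_sec:.0e}")  # 5e+16 synaptic transmissions per second
# Counting each ~5-bit signal as a couple of elementary operations puts the
# total on the order of 10^17 ops.
```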
The true value cannot be much higher than this, but it might be much lower. There seems to be great redundancy in the brain; synchronous firing of large pools of neurons is often required if the signal is not to drown in the general noise. An alternative way of calculating the total capacity is to consider some part of the cortex that performs a function that we know how to replicate on digital computers. We calculate the average computer-equivalent processing capacity of a single neuron in that cortical area, and multiply this value by the number of neurons in the brain. Hans Moravec has done this calculation using data about the human retina (Moravec 1997); comparing it with known facts about the computational demands of edge extraction in robot vision, he got the value 10^14 ops for the human brain as a whole. That is three orders of magnitude less than the upper bound calculated by assuming that there is no redundancy.
I see no reason to suppose that the redundancy in the retina should be any greater than in the cortex. If anything, one would rather expect it to be the other way around, since edge extraction is easier than higher cognitive processes and therefore presumably more optimised (by evolution and individual learning).
If we need 100 Tops to simulate the human brain, then the required computational power will be reached sometime between 2004 and 2008, depending on whether the doubling time is 12 or 18 months. These would be the best experimental supercomputers in the world, not necessarily the computers available to AI researchers. Depending on how much funding is forthcoming, it might take an additional decade before researchers experimenting with general intelligence machines have access to machines with this capacity.
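As a sanity check on these dates, here is the arithmetic (a rough sketch; the 1.5 Tops baseline and the doubling times are those given in the text):

```python
import math

# How many Moore's-law doublings separate the current 1.5 Tops machine
# from the 100 Tops lower-bound estimate for the brain?
current_ops = 1.5e12   # fastest supercomputer, late 1997
target_ops = 100e12    # 100 Tops lower bound

doublings = math.log2(target_ops / current_ops)
print(round(doublings, 1))  # 6.1

# At 12 months per doubling: ~6 years after 1997, i.e. around 2004.
# At 18 months per doubling: ~9 years, landing late in the 2004-2008 window.
```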
This is if we take the retina simulation as a model. At present, however, not enough is known about the neocortex to allow us to simulate it in such a way, but the knowledge might well be available by 2004 to 2008 (as we shall see in the next section). What is required, if we are to get human-level AI with this lower bound on the hardware power, is the ability to simulate 1000-neuron aggregates in a highly efficient way.
The extreme alternative, which is what we assumed in the derivation of the upper bound, is to simulate each neuron individually. The amount of computational capacity that neuroscientists can expend on simulating the processes of a single neuron knows no limits, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell, rather than to do just the minimal amount of computation necessary to replicate those features of its response function which are relevant for the total performance of the neural net. It is not known how much of this detail is contingent and inessential and how much needs to be preserved in a simulation that replicates the performance of the whole. It seems like a good bet though, at least to me, that the nodes could be strongly simplified and replaced with simple standardised elements. It appears perfectly feasible to have an intelligent neural network with any of a large variety of neuronal output functions and time delays.
It does look plausible, however, that when we can simulate an idealised neuron and know enough about the connection matrix to put the nodes together in a way that functionally mirrors how it is done in the brain, then we will also be able to replace groups of thousands of these by something that requires less than the computational power that it takes to simulate each neuron individually. We might well get all the way down to a mere 1000 instructions per neuron and second, as is assumed in Moravec's estimate. But unless we can build these modules without first building a whole brain then this augmentation will only be possible after we already have human-equivalent artificial intelligence.
If we assume the upper bound on the computational power needed to simulate the human brain, i.e. if we assume enough power to simulate each neuron individually (10^17 ops), then Moore's law says that we will have to wait until about 2015 or 2024 (for doubling times of 12 and 18 months, respectively) before supercomputers with the requisite performance are at hand. But if by then we know how to do the simulation on the level of individual neurons, we will presumably also have figured out how to make at least some optimizations, so we could probably adjust these upper bounds a bit downwards.
So far I have been talking only of processor speed, but computers also need a great deal of memory if they are to replicate the brain's performance. Throughout the history of computers, the ratio between memory and speed has remained more or less constant at about 1 byte/ops. Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate), it seems that speed rather than memory would be the bottleneck in brain simulations on the neuronal level. If we instead assume that we can achieve a thousand-fold leverage in our simulation speed as was indicated by the comparison of the retina with robot vision, then that would bring the requirement of speed down, perhaps, one order of magnitude below the memory requirement. But if we can optimize away three orders of magnitude on speed by simulating 1000-neuron aggregates, we will probably be able to cut away one order of magnitude of the memory requirement as well. Thus the difficulty of building enough memory may be significantly smaller, and is almost certainly not significantly greater, than the difficulty of building a processor that is fast enough. Thus we can focus on the speed requirement as the bottleneck on the hardware.
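For concreteness, here is the memory side of this estimate (a rough sketch using the text's assumption of about 1 byte of state per synapse):

```python
# Rough memory estimate for a neuron-level brain simulation, following the
# text's assumption of ~1 byte of stored state per synapse.
neurons = 1e11
synapses_per_neuron = 5e3
bytes_per_synapse = 1

memory_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{memory_bytes:.0e} bytes")  # 5e+14 bytes, i.e. ~500 terabytes
# At the historical ~1 byte/ops ratio, a 10^17 ops machine would come with
# ~10^17 bytes of memory, so speed rather than memory is the binding constraint.
```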
I have ignored the possibility that quantum phenomena are irreducibly involved in human cognition. Hameroff and Penrose and others have suggested that coherent quantum states may exist in the microtubules, and that the brain utilises these phenomena to perform high-level cognitive feats. In my and many others' opinion, this is highly implausible. I won't try to motivate this judgement here.
In conclusion we can say that the hardware capacity for human-equivalent artificial intelligence will likely exist before the end of the first quarter of the next century, and may be reached as early as 2004. A corresponding capacity should be available to leading AI labs within ten years thereafter, or sooner if the potential of human-level AI and superintelligence is by then understood by funding agencies.
<A> It is possible to nit-pick on this estimate. For example, there is some evidence that some limited amount of communication between nerve cells is possible without synaptic transmission. And then we have the regulatory mechanisms consisting of neurotransmitters and their sources, receptors and re-uptake channels. While neurotransmitter balances are very important for the proper functioning of the human brain, they have an insignificant information content compared to the synaptic structure. Perhaps a more serious point is that neurons often have rather complex time integration properties: whether a specific set of synaptic inputs results in the firing of a neuron depends on their exact timing. My opinion is that, except possibly for a small number of special applications such as stereo audition, the temporal properties of the neurons can easily be accommodated with a time resolution of the simulation on the order of 1 ms. In an un-optimized simulation this would add an order of magnitude to the estimate given above, where we assumed a resolution of 10 ms, corresponding to a firing rate of 100 Hz. However, the other values cited are rather too high than too low, I think, so I don't believe we should change the estimate much to allow for possible fine-grained time integration effects in a neuron's synaptic tree. If someone thinks otherwise, he can add an order of magnitude and add three years to the predicted upper bound of the time when human-equivalent hardware arrives (but not to the lower bound based on Moravec's calculation).
Software via the bottom-up approach
So much for the hardware. If we want superintelligence, we will also need to develop appropriate software. There are several approaches to this, varying in the amount of top-down direction they require. At one end of the spectrum we have systems like CYC, which is a very large multi-contextual knowledge base and inference engine. For more than a decade it has been spoon-fed facts, rules of thumb, and heuristics by a team of knowledge enterers. If this were the only possible way forward then the position of those who believe that strong AI will never happen would not be absurd. But there are other ways.
Given sufficient hardware, and the right architecture, we could program the artilect the same way as we program a child, i.e. by letting it interact with the external world and by educating it. There are well-known methods, such as the backpropagation algorithm, that can achieve good results in many small applications involving multi-layer neural networks. Unfortunately this algorithm doesn't scale well. The Hebbian learning rule, on the other hand, is perfectly scalable (it scales linearly, since each weight update only involves the activity of two nodes, independently of the size of the network). It is known to be a major mode of learning in the brain. However, if the Hebbian learning rule is to achieve all that humans can do, then a great deal of sophisticated neuronal architecture must presumably be presupposed. In biological organisms, this architecture would be genetically coded. At its present stage, neuroscience cannot tell us what this architecture is. It is not known, for example, how purely Hebbian learning can allow the brain to store structured representations in long-term memory (though see Bostrom 1996). So we have to await advances in neuroscience before we can construct a human-level (or even higher animal-level) artificial intelligence by means of this radically bottom-up approach. While it is true that neuroscience has advanced very rapidly in recent years, it is difficult to estimate how long it will take before we know enough about the brain's architecture and its learning algorithms to be able to replicate these in a computer of sufficient computational capacity. A wild guess: fifteen years.
Note that this is not a prediction about how long it will take until we have a complete understanding of all the major features of the brain; the 15 years figure refers to the time when we might be expected to know enough about the basic principles of how the brain works to be able to begin to implement these computational paradigms on a computer, without necessarily modelling the brain in any biologically realistic way.
This estimate might seem to some to underestimate the difficulties, and perhaps it does. But consider how much has happened during the past 15 years. The whole discipline of computational neuroscience hardly existed back in 1982. And future progress will occur not only because research continues as hitherto, but also because new experimental devices become available. Large-scale multi-electrode recordings should be feasible within the near future. Computer/neuro interfaces are in development. More powerful hardware is being made available to neuroscientists for computation-intensive simulations. Neuropharmacologists refine drugs with higher specificity, allowing researchers to manipulate the levels of any given neurotransmitter. Scanning techniques are improving. The list goes on. All these innovations give neuroscientists new tools with which to probe the mysteries of the mammalian brain.
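Returning to the learning rules discussed above: the locality that makes Hebbian updates scale linearly can be sketched in a few lines (a minimal toy example; the names and values are illustrative, not from the paper):

```python
# Minimal sketch of a Hebbian update: each weight change involves only the
# activities of the two nodes the connection joins, so one learning step
# costs time proportional to the number of weights, whatever the network size.
def hebbian_update(weights, pre, post, lr=0.01):
    """weights[i][j] += lr * post[i] * pre[j] (co-active nodes wire together)."""
    return [[w_ij + lr * post[i] * pre[j]
             for j, w_ij in enumerate(row)]
            for i, row in enumerate(weights)]

pre = [1.0, 0.0, 1.0]               # presynaptic activities
post = [0.0, 1.0]                   # postsynaptic activities
w = [[0.0] * 3 for _ in range(2)]   # initial weights
w = hebbian_update(w, pre, post)
# Only connections between co-active nodes are strengthened:
# w == [[0.0, 0.0, 0.0], [0.01, 0.0, 0.01]]
```

Backpropagation, by contrast, must propagate error signals through the whole network before any weight can be updated, which is part of why it scales poorly.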
The information that goes into producing the output of the human brain comes partly from our genes and partly from sensory input.
The sensory input signals could easily be provided to an artificial intelligence as well, by using video cameras, microphones, and tactile sensors. It would be easy to supply all of these input channels, although any one of them would probably suffice. No one sensory modality is essential for human-level intelligence; we know that from humans who are born blind or deaf and still are not mentally handicapped. The interactive element present in human children could also be arranged, by means of robot limbs and a speaker.
So there should be no problem supplying the AI with the sensory input and interaction possibilities of a human child. The genetic information is less trivial. We hope that neuroscience will soon be in a position to tell us enough about the principles underlying human cognition to use similar principles to program human-equivalent hardware. I have argued that there is reason for optimism here; the knowledge might be available well within ten, fifteen years. We shall now consider three further arguments for this view.
One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain's ability to reach adult performance levels depended on a very large amount of genetic hardwiring -- if each function depended on a unique and hopelessly complicated inborn architecture, discovered over the aeons in the evolutionary learning process of our species. However, there are some considerations indicating that this is not the case. I will give a very sketchy outline of these here. For a somewhat more extensive overview, see Phillips & Singer (1996).
First, consider the plasticity of the neocortex, especially in infants. It is known that cortical lesions, even sizeable ones, can often be compensated for if they occur at an early age, by other areas which take over the functions that would normally have been developed in the destroyed region. In one study, for example, sensitivity to visual features was developed in the auditory cortex of neonatal ferrets, after its normal auditory input had been replaced with visual projections (Sur et al. 1988). Similarly, it has been shown that the visual cortex can take over functions normally served by the somatosensory cortex (Schlaggar & O'Leary 1991). A recent article in Nature (Cohen et al. 1997) described an experiment showing that people who have been blind since an early age can use their visual cortex to process tactile stimulation when they read Braille. It is true that there are some more primitive structures in the deeper regions of the brain whose functions cannot be taken over by any other area. For example, people who have their hippocampus removed lose the ability to learn new episodic or semantic facts. But it looks as if the neocortex, where most of the processing is done that makes us intellectually superior to other animals, is highly plastic. It would be interesting to examine in more detail to what extent this holds for all higher-level cognitive abilities in humans. Are there small neocortical regions such that, if excised at birth, the subject will never obtain certain high-level competencies, not even to a limited degree?
Second, we have the fact that as far as we know, the neocortical architecture in humans, and especially in infants, is remarkably homogeneous over different cortical regions and even over species:
Laminations and vertical connections between lamina are hallmarks of all cortical systems, the morphological and physiological characteristics of cortical neurons are equivalent in different species, as are the kinds of synaptic interactions involving cortical neurons. This similarity in the organization of the cerebral cortex extends even to the specific details of cortical circuitry. (White 1989, p. 179).
Third, an evolutionary argument can be devised from the fact that the growth of neocortex that allowed Homo sapiens to intellectually outclass other animals took place over a relatively brief period of time. This means that evolutionary learning cannot have embedded very much information in these additional cortical structures that give us our intellectual edge; they must rather be the result of changes in a limited number of genes, regulating a limited number of cortical parameters.
Note that none of these three considerations is an argument against modularization of adult human brains. They only indicate that a considerable part of the information that goes into the modularization results from self-organization and perceptual input, rather than from an immensely complicated genetic lookup table. So these considerations all support the view that the amount of neuroscientific information that is needed for the bottom-up approach to succeed is fairly limited.
Why the past failure of AI is no argument against its future success
The AI field stagnated in the seventies and eighties, as the expectations from the early days failed to materialize and progress slowed down. Some people take this to show that the AI ideal is dead and that we will never have superintelligence, or at least not for thousands of years. I think these people draw the wrong lesson from this episode. The only thing that this stagnation tells us is that AI is more difficult than some of the early pioneers might have thought, but it goes no way towards showing that AI will remain unfeasible. In retrospect, we know that the AI project couldn't possibly have succeeded at that stage: the hardware was simply not powerful enough. It seems that we need at least about 100 Tops for human-like performance, and possibly as much as 10^17 ops. The computers of the seventies had a computational capacity comparable to that of insects, and they also achieved approximately insect-level intelligence. Now, on the other hand, we can foresee the arrival of human-equivalent hardware, so the reason for AI's past failure will then no longer obtain.
There is also an explanation for the relative absence even of noticeable progress in this period. As Moravec points out:
Alas, for several decades the computing power found in advanced Artificial Intelligence and Robotics systems has been stuck at insect brain power of 1 MIPS. While computer power per dollar fell rapidly during this period, the money available fell just as fast. The earliest days of AI, in the mid 1960s, were fuelled by lavish post-Sputnik defence funding, which gave access to $10,000,000 supercomputers of the time. In the post Vietnam war days of the 1970s, funding declined and only $1,000,000 machines were available. By the early 1980s, AI research had to settle for $100,000 minicomputers. In the late 1980s, the available machines were $10,000 workstations. By the 1990s, much work was done on personal computers costing only a few thousand dollars. Since then AI and robot brain power has risen with improvements in computer efficiency. By 1993 personal computers provided 10 MIPS, by 1995 it was 30 MIPS, and in 1997 it is over 100 MIPS. Suddenly machines are reading text, recognizing speech, and robots are driving themselves cross country. (Moravec 1997)
Once there is human-level AI there will soon be superintelligence
Because there would be strong positive feedback loops (AI helps constructing better AI, which helps constructing better AI, which...), I think we can make the prediction that once there is human-level artificial intelligence then it will not be very long before there is superintelligence.
Apart from employing the superior intelligences as designers of superintelligence, note also that since they are computer-based rather than biological, it might be possible to copy skills or modules from distinguished performers, and to combine the best parts of several superior intellects into one superintelligence. In general, the intellectual achievements of artificial intellects are additive in a way that human achievements are not, or only to a much smaller degree.
The demand for superintelligence
Given that superintelligence will one day be technologically feasible, will people choose to develop it? I think we can confidently answer that question in the affirmative. Associated with every step along the road there are enormous economic payoffs. The computer industry invests huge amounts in the next generation of hardware and software, and it will continue doing so as long as there is competitive pressure and profits to be made. People want the good things that better computers and smarter software can do: they can help develop better drugs, relieve humans from the need to perform boring or dangerous jobs, provide entertainment -- there is no end to the list of consumer benefits of better AI. There is also a strong military motive to develop artificial intelligence. And nowhere on the road to superintelligence is there any natural stopping point where technophobics could plausibly argue "hither but not further".
It therefore seems that, up to human-equivalence, the push for improvements in AI will overpower whatever resistance there might be at that stage. When the question is about human-level or greater intelligence, it is easy to imagine that there might be strong political forces opposing further development, because of the dangers that superior artilects might pose to the continued existence of the human species. The feasibility of various ways to contain that danger is a contentious topic. If future policy-makers can be sure that there would be no artilect revolution, then the development of artificial intelligence will continue. If they can't be sure that there would be no danger, then the development might continue anyway, either because people don't regard the gradual displacement of humans by artilects as a bad outcome, or because such strong forces (motivated by short-term profit, curiosity, ideology, or desire for the capabilities that superintelligences might bring to their creators) are active that a collective decision to ban new research in this field could not be reached or successfully implemented. (Hugo de Garis thinks that this debate about species dominance might ultimately give rise to a big war between "cosmists", who favour further development, and "terrans", who would rather perpetuate the status quo in artificial intelligence.)
Superintelligence is feasible
Depending on how much we can optimize, simulating the processing of a human brain requires between 10^14 and 10^17 ops. We cannot even be 100% sure that the value is within this interval; it seems quite possible that very advanced optimization could bring it down further, but the entrance level would probably not be less than about 10^14 ops. If Moore's law continues to hold, then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024. The past success of Moore's law gives some inductive reason to believe that it will hold another ten or fifteen years or so, and this prediction is supported by the fact that there are many promising new techniques currently under development that seem to guarantee great scope for further increases in available computational power. There is no direct reason to suppose that Moore's law won't hold longer than 15 years. It thus seems likely that the requisite hardware will be constructed in the first quarter of the next century, possibly within the first few years.
There are several approaches to developing the software. One is to emulate the basic principles of biological brains. It is not implausible to suppose that these principles will be well enough known within 15 years for this approach to succeed, given adequate hardware.
The stagnation of AI during the seventies and eighties does not have much bearing on the likelihood that AI will succeed in the future, since we know that the causes responsible for the stagnation are being removed.
Nanotechnology would quickly give superior intelligence, and most likely superintelligence soon thereafter. Indeed, any plausible form of superior intelligence would probably give superintelligence before very long.
There will be a strong and increasing pressure to improve AI up to human-level. If there is a way of guaranteeing perpetual obedience of superior artificial intellects to humans, then such intellects will be created. If there is no way to have such a guarantee, then my guess is that they will be created anyway.

Bostrom N. 1996. "Cortical Integration: Possible Solutions to the Binding and Linking Problems in Perception, Reasoning and Long Term Memory". Forthcoming. Manuscript available from

Cohen L. G. et al. 1997. "Functional relevance of cross-modal plasticity in blind humans". Nature, vol. 389, pp. 180-183.

DeGaris, H.

Hameroff and Penrose

Moravec, H. 1998. Robot: Mere Machine to Transcendent Mind. Forthcoming, Oxford University Press. Preview at

Moravec, H. 1997.

Phillips W. A. & Singer W. 1996. "In Search of Common Foundations for Cortical Computations". (Penultimate draft.) Forthcoming in The Behavioural and Brain Sciences.

Schlaggar, B. L. & O'Leary, D. D. M. 1991. "Potential of visual cortex to develop an array of functional units unique to somatosensory cortex". Science 252: 1556-60.

Sur, M. et al. 1988. "Experimentally induced visual projections into auditory thalamus and cortex". Science 242: 1437-41.

White, E. L. 1989. Cortical Circuits: Synaptic Organization of the Cerebral Cortex. Structure, Function and Theory.