Uploading the Mind
From Wikipedia, the free encyclopedia
Mind uploading or whole brain emulation (sometimes called mind transfer) is the hypothetical process of scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer would have to run a simulation model so faithful to the original that it would behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. The simulated mind is assumed to be part of a virtual reality simulated world, supported by a simplified body simulation model. Alternatively, the simulated mind could be assumed to reside in a computer inside (or connected to) a humanoid robot or a biological body, replacing its brain.
Whole brain emulation is discussed as a “logical endpoint” of the topical computational neuroscience and neuroinformatics fields, both about brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Among futurists and within the transhumanist movement it is an important proposed life extension technology, originally suggested in biomedical literature in 1971. It is a central conceptual feature of numerous science fiction novels and films.
Whole brain emulation is considered by some scientists as a theoretical and futuristic but possible technology, although mainstream research funders remain skeptical. Over the years, several contradictory predictions have been made about when whole human brain emulation might be achieved, some of whose target dates have already passed. Substantial mainstream research and development is nevertheless being done in relevant areas, including development of faster supercomputers, virtual reality, brain-computer interfaces, animal brain mapping and simulation, and information extraction from dynamically functioning brains.
The question of whether an emulated brain can be a human mind is debated by philosophers, and may be contradicted by the dualistic view of the human mind that is common in many religions.
[Figure: Neuron anatomical model]
[Figure: Simple artificial neural network]
The human brain contains about 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
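The network picture just described can be illustrated with a toy artificial neural network in the spirit of the figure above. The following Python sketch is purely illustrative: the weights, biases, and two-input topology are invented for the example, and the weighted connections only loosely stand in for synaptic strengths.

```python
import math

def sigmoid(x):
    # Squashing nonlinearity: maps any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # A model neuron: weighted sum of inputs, plus bias, then squashed.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def tiny_network(x):
    # Two hidden "neurons" feeding one output neuron.
    # All weights here are illustrative, not fitted parameters.
    h1 = neuron(x, [0.5, -0.3], 0.1)
    h2 = neuron(x, [-0.2, 0.8], -0.1)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

out = tiny_network([1.0, 0.0])
assert 0.0 < out < 1.0   # a sigmoid output always lies in (0, 1)
```

Real neurons are far more complex than this weighted-sum caricature, but the sketch captures the structural idea the consensus view rests on: behavior emerging from many simple, connected processing units.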
Many leading neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”
The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.
Many eminent computer scientists and neuroscientists have predicted that computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinas.
Such a machine intelligence capability might provide a computational substrate necessary for uploading.
However, even though uploading depends on such a general capability, it is conceptually distinct from general forms of AI in that it results from the dynamic reanimation of information derived from a specific human mind, so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or “noömorph.”
Even if uploading is theoretically possible, the amount of storage and computational power required are difficult to predict. Nevertheless, many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations (citations needed for Boahen, Modha, Izhikevich, Bostrom and Sandberg, others). Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s Law continue.
The prospect of uploading human consciousness in this manner raises many philosophical questions involving identity, individuality and the soul, as well as numerous problems of medical ethics and morality of the process.
 Theoretical benefits
A computer-based intelligence such as an upload could potentially think much faster than a human even if it was no more intelligent. Human neurons exchange electrochemical signals with a maximum speed of about 150 meters per second, whereas the speed of light is about 300 million meters per second, about two million times faster. Also, neurons can generate a maximum of about 200 action potentials or “spikes” per second, whereas the number of signals per second in modern computer chips is about 2 GHz (about ten million times greater) and continually increasing. So even if the computer components responsible for simulating a brain were not significantly smaller than a biological brain, and even if the temperature of these components was not significantly lower, Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence calculates that a simulated brain could run about 1 million times faster than a real brain, experiencing about a year of subjective time in only 31 seconds of real time.
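The speed comparisons above can be checked with simple arithmetic; the constants below are the figures quoted in the text (150 m/s, ~200 spikes/s, ~2 GHz, a millionfold speedup).

```python
# Back-of-envelope reproduction of the speed comparison above.
SIGNAL_SPEED_NEURON = 150.0    # m/s, electrochemical signal speed
SPEED_OF_LIGHT = 3.0e8         # m/s
SPIKES_PER_SECOND = 200.0      # max neuron firing rate
CLOCK_HZ = 2.0e9               # ~2 GHz chip clock

assert round(SPEED_OF_LIGHT / SIGNAL_SPEED_NEURON) == 2_000_000
assert round(CLOCK_HZ / SPIKES_PER_SECOND) == 10_000_000

# A brain simulated 1,000,000x faster experiences a year of subjective
# time in about 31.5 seconds of real time.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
real_seconds = SECONDS_PER_YEAR / 1_000_000
assert 31 <= real_seconds <= 32
```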
Main article: Digital immortality
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby reducing or eliminating mortality risk. This general proposal appears to have been first made in the biomedical literature in 1971 by renowned University of Washington biogerontologist George M. Martin.
 Multiple/parallel existence
Another concept explored in science fiction is the idea of more than one running “copy” of a human mind existing at once. Such copies could potentially allow an “individual” to experience many things at once, and later integrate the experiences of all copies into a central mentality at some point in the future, effectively allowing a single sentient being to “be many places at once” and “do many things at once”. Such partial and complete copies of a sentient being raise interesting questions regarding identity and individuality.
Futurist Ray Kurzweil’s projected supercomputer processing power based on Moore’s law exponential development of computer capacity. Here the computational capacity doubling time is assumed to be 1.2 years.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
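The growth assumption in the projection above (capacity doubling every 1.2 years) can be written as a one-line formula. The starting capacity and the target factor in the example are illustrative.

```python
import math

# Moore's-law-style growth with a 1.2-year doubling time, as assumed
# in the projection above.
DOUBLING_TIME_YEARS = 1.2

def projected_capacity(capacity_now, years_ahead):
    # capacity(t) = capacity(0) * 2^(t / doubling_time)
    return capacity_now * 2 ** (years_ahead / DOUBLING_TIME_YEARS)

assert projected_capacity(1.0, 1.2) == 2.0   # one doubling time

# Illustrative question: how long for a 36.8x increase in capacity
# (the factor between 1 petaflop and the 36.8-petaflop estimate
# quoted elsewhere in this article)?
years = DOUBLING_TIME_YEARS * math.log2(36.8)
assert 6 < years < 7                          # a bit over six years
```

The optimistic timelines quoted by advocates amount to reading numbers like these off the curve; the controversy is over whether the exponent and the problem size are both right.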
Henry Markram, lead researcher of the “Blue Brain Project”, has stated that “it is not [their] goal to build an intelligent neural network”, based solely on the computational demands such a project would have:
“It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.”
Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power may become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
 Philosophical issues
 Copying vs. moving
Another philosophical issue with mind uploading is whether an uploaded mind is really the “same” sentience, or simply an exact copy with the same memories and personality; or, indeed, what the difference could be between such a copy and the original (see the Swampman thought experiment). This issue is especially complex if the original remains essentially unchanged by the procedure, thereby resulting in an obvious copy which could potentially have rights separate from the unaltered, obvious original.
Most projected brain scanning technologies, such as serial sectioning of the brain, would necessarily be destructive, and the original brain would not survive the brain scanning procedure. But if it can be kept intact, the computer-based consciousness could be a copy of the still-living biological person. It is in that case implicit that copying a consciousness could be as feasible as literally moving it into one or several copies, since these technologies generally involve simulation of a human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is usually assumed that once the versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.
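The copying question above can be made concrete with a toy sketch: a digital state can be duplicated exactly, after which the copies share all memories up to the branch point and diverge thereafter. The MindState class is invented for illustration and is not a model of a mind.

```python
import copy

class MindState:
    # Illustrative stand-in for a digital "mind state": just a
    # growing list of remembered events.
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

original = MindState()
original.experience("childhood")
original.experience("moment of scanning")

duplicate = copy.deepcopy(original)      # an exact, independent copy
assert duplicate.memories == original.memories

original.experience("stayed biological")
duplicate.experience("woke up simulated")

# Shared past, divergent present.
assert original.memories[:2] == duplicate.memories[:2]
assert original.memories[2] != duplicate.memories[2]
```

This is exactly the behavior the paragraph describes: digital files copy precisely, so the two instances agree on everything up to the moment of copying and nothing after it.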
The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings. The most parsimonious view of this phenomenon is that the two (or more) minds would share memories of their past but from the point of duplication would simply be distinct minds (although this is complicated by merging). Many complex variations are possible.
Depending on computational capacity, the simulation may run at a slower or faster simulation time than elapsed physical time, with the result that the simulated mind would perceive the physical world in slow motion or fast motion respectively, while biological persons would see the simulated mind in fast or slow motion respectively.
A brain simulation can be started, paused, backed-up and rerun from a saved backup state at any time. The simulated mind would in the latter case forget everything that has happened after the instant of backup, and perhaps not even be aware that it is repeating itself. An older version of a simulated mind may meet a younger version and share experiences with it.
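For a digital simulation, the pause/backup/rerun property described above is ordinary state serialization. A minimal sketch (the state dictionary is an illustrative stand-in, not a model of a mind):

```python
import pickle

# A trivial "simulation state": a tick counter and a memory log.
state = {"tick": 0, "memories": ["boot"]}

checkpoint = pickle.dumps(state)         # back up at this instant

# The simulation continues after the backup...
state["tick"] = 100
state["memories"].append("events after the backup")

# ...and is later rerun from the saved state. Everything after the
# instant of backup is gone for the restored instance.
restored = pickle.loads(checkpoint)
assert restored["tick"] == 0
assert "events after the backup" not in restored["memories"]
```

The restored instance has no record that anything happened after the checkpoint, which is precisely the "forgets everything after the instant of backup" scenario in the text.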
Legal and economic issues
See also: Ship of Theseus
The only limited resources in a simulated world are computational resources (meaning simulation speed) and intellectual property. In a simulated society, rich simulated minds may pay for faster simulation time than others.
It may be difficult for authorities to supervise that human rights are not threatened in any computer in the world. It might for example be tempting for social science researchers to expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments, where many copies of the same minds, or repeated reruns of the same simulation, are exposed to different test conditions.
“EURON is shorthand for ‘EUropean RObotics research Network’. It is the community of more than 225 academic and industrial groups in Europe with a common interest in doing advanced research and development to make better robots.”
The Network brings together researchers and commercial companies working on artificial perception systems to model neuronal functions and cognitive processes, to optimize existing learning algorithms and to realize intelligent artificial systems.
Cyberhand is a robotic-hand project, funded by the EU Future and Emerging Technologies programme, for the replacement of lost limbs. The hand is designed to respond to signals from the human nervous system.
BBP is a massive cooperative project of EPFL (Switzerland) and IBM. It uses IBM’s Blue Gene supercomputer to copy the whole human brain through reverse engineering.
The Berlin Brain Computer Interface (BBCI) is a collaboration between German researchers to develop BCI technology for commercial and medical uses.
A 7-million-euro EC-funded collaboration among 15 different laboratories in 7 countries for the purpose of developing virtual reality environments with BCI applications.
A collaborative project of 5 European countries focused on the paralyzed human hand.
Focuses on basic research fusing neuroscience and robotics to design, develop and test, tele-operated robotic systems to help restore personal autonomy to sensory-motor-disabled persons.
Bruce Katz received his Ph.D. in artificial intelligence from the University of Illinois. He is a frequent lecturer in artificial intelligence at the University of Sussex in the U.K. and serves as adjunct professor of Computer Engineering at Drexel University in Philadelphia. Dr. Katz is the author of Neuroengineering the Future and Digital Design, as well as many journal articles.
Katz believes we are on the cusp of a broad neuro-revolution, one that will radically reshape our views of perception, cognition, emotion and even personal identity. Neuroengineering is rapidly advancing from perceptual aids such as cochlear implants to devices that will enhance and speed up thought. Ultimately, he says, this may free the mind from its bound state in the body to a platform independent existence.
- h+ Magazine Current Issue
- Tweaking Your Neurons
- Mind Uploading and Mind Children
- Building A Device to Keep Your Memories
h+: What trends do you see in cognitive enhancement modalities and therapies (drugs, supplements, music, meditation, entrainment, AI and so forth)?
BRUCE KATZ: There are two primary types of cognitive enhancement — enhancement of intelligence and enhancement of creative faculties. Even though creativity is often considered a quasi-mystical process, it may surprise some that we are actually closer to enhancing this aspect of cognition than pure intelligence.
The reason is that intelligence is an unwieldy collection of processes, and creativity is more akin to a state, so it may very well be possible to produce higher levels of creative insight for a fixed level of intelligence before we are able to make people smarter in general.
There appear to be three main neurophysiological ingredients that influence the creative process. These are: 1) relatively low levels of cortical arousal; 2) a relatively flat associative gradient; and 3) a judicious amount of noise in the cognitive system. [Editor’s note: A person with a steep associative gradient makes only a few common associations with a stimulus word such as “flight,” whereas one with a flat gradient makes many associations with the stimulus word. Creative people have been found to have fairly flat gradients, and uncreative people much steeper gradients.]
All three ingredients conspire to encourage the conditions whereby cognition runs outside of its normal attractors, and produces new and potentially valuable insights.
Solving compound remote associate (CRA) problems illustrates how these factors work. In a CRA problem, the task is to find a word that is related to three items. For example, given “fountain”, “baking”, and “pop” the solution would be “soda.”
The reason CRA problems are difficult, and why creative insight helps, is that the mind tends to fixate on the stronger associates of the priming words (for example, “music” for “pop”), which in turn inhibits the desired solution.
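A toy solver makes the fixation effect concrete. The association lists below are invented for illustration: the "steep" dictionary holds only each cue's strongest associate, while the "flat" one also includes weaker associates, which is what makes the remote solution reachable.

```python
associations = {
    # Steep gradient: only the strongest associate of each cue.
    "steep": {
        "fountain": {"water"},
        "baking": {"oven"},
        "pop": {"music"},
    },
    # Flat gradient: weaker, more remote associates included too.
    "flat": {
        "fountain": {"water", "pen", "soda"},
        "baking": {"oven", "soda", "bread"},
        "pop": {"music", "corn", "soda"},
    },
}

def solve_cra(cues, gradient):
    # A CRA solution is a word associated with all three cues.
    sets = [associations[gradient][c] for c in cues]
    common = set.intersection(*sets)
    return common.pop() if len(common) == 1 else None

cues = ["fountain", "baking", "pop"]
assert solve_cra(cues, "steep") is None   # fixated on strong associates
assert solve_cra(cues, "flat") == "soda"  # flat gradient reaches it
```

The "steep" solver fails for exactly the reason given above: the strong associates (water, oven, music) crowd out the weak shared one.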
What are the implications of this for artificially enhancing insight? First, any technique that quiets the mind is likely to have beneficial effects. These include traditional meditative techniques, but possibly also more brute-force technologies such as transcranial magnetic stimulation (TMS). Low frequency pulses (below 1Hz) enable inhibitory processes, and TMS applied in this manner to the frontal cortices could produce the desired result.
Second, the inhibition of the more literal and less associative left hemisphere through similar means could also produce good results. In fact, EEG studies of people solving CRA problems with insight have shown an increase in gamma activity (possibly indicative of conceptual binding activity) in the right but not the left hemisphere just prior to solution.
Finally, the application of noise to the brain, either non-invasively, through TMS, or eventually through direct stimulation may encourage it to be more “playful” and to escape its normal ruts.
In the not too distant future, we may not have to rely on nature to produce the one-in-a-million combination [of a high IQ and creative insight], and be able to produce it at will on many if not all neural substrates.
h+: What are some of the issues (legal, societal, ethical) that you anticipate for such technology?
BK: My own opinion is that — except in the case of minors — we must let an informed public make their own choices. Any government-mandated set of rules will be imperfect, and in any case will deviate from the needs and desires of its individual citizens.
What we in the neuroengineering community should be pushing for is a comprehensive freedom of thought initiative, ideally enshrined as a constitutional amendment rather than as a set of clumsy laws. And we should be doing so sooner rather than later, before individual technologies come online, and before we allow the “tyranny of the majority” to control a right that ought to trump all other rights.
h+: What is your vision for the future of cognitive enhancement and neurotechnology in the next 20 years?
BK: Ultimately, we want to be free of the limitations of the human brain. There are just too many inherent difficulties in its kludgy design — provided by evolution — to make it worthwhile to continue along this path.
As I describe in my book, Neuroengineering the Future, these kludges include:
- Short-term memory limitations (typically seven plus or minus two items),
- Significant long-term memory limitations (the brain can only hold about as much as a PC hard disk circa 1990),
- Strong limitations on processing speed (although the brain is a highly parallel system, each neuron is a very slow processor),
- Bounds on rationality (we are less than fully impartial processors).
Cognitive Computing Project Aims to Reverse-Engineer the Mind
Imagine a computer that can process text, video and audio in an instant, solve problems on the fly, and do it all while consuming just 10 watts of power.
It would be the ultimate computing machine if it were built with silicon instead of human nerve cells.
Compare that to current computers, which require extensive, custom programming for each application, consume hundreds of watts in power, and are still not fast enough. So it’s no surprise that some computer scientists want to go back to the drawing board and try building computers that more closely emulate nature.
“The plan is to engineer the mind by reverse-engineering the brain,”
says Dharmendra Modha, manager of the cognitive computing project at
IBM Almaden Research Center.
In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain’s abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.
The 39-year old Modha, a Mumbai, India-born computer science engineer, has helped assemble a coalition of the country’s best researchers in a collaborative project that includes five universities, including Stanford, Cornell and Columbia, in addition to IBM.
The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.
In October, the group bagged a $5 million grant from Darpa — just enough to get the first phase of the project going. If successful, they say, we could have the basics of a new computing system within the next decade.
“The idea is to do software simulations and build hardware chips that would be based on what we know about how the brain and how neural circuits work,” says Christopher Kello, an associate professor at the University of California-Merced who’s involved in the project.
Computing today is based on the von Neumann architecture, a design whose building blocks (the control unit, the arithmetic logic unit and the memory) are the stuff of Computing 101. But that architecture presents two fundamental problems: The connection between the memory and the processor can get overloaded, limiting the speed of the computer to the pace at which it can transfer data between the two. And it requires specific programs written to perform specific tasks.
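The fetch-execute loop and shared program/data memory that define the architecture can be sketched in a few lines. The four-instruction set here is invented for illustration; the fetch counter marks each trip across the memory-processor connection that constitutes the bottleneck.

```python
# A minimal von Neumann machine: program and data share one memory,
# and every step requires a fetch across the memory-processor link.
memory = [
    ("LOAD", 5),    # acc = mem[5]
    ("ADD", 6),     # acc += mem[6]
    ("STORE", 7),   # mem[7] = acc
    ("HALT", None),
    None,           # unused
    10,             # data at address 5
    32,             # data at address 6
    0,              # result goes to address 7
]

pc, acc, fetches = 0, 0, 0
while True:
    op, addr = memory[pc]      # instruction fetch (one memory trip)
    fetches += 1
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

assert memory[7] == 42
```

Every instruction, and every operand, crosses the same memory-processor connection; that serial traffic is the overload the paragraph describes, and the fixed program is its second problem.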
In contrast, the brain distributes memory and processing functions throughout the system, learning through situations and solving problems it has never encountered before, using a complex combination of reasoning, synthesis and creativity.
“The brain works in a massively multi-threaded way,” says Charles King, an analyst with Pund-IT, a research and consulting firm. “Information is coming through all the five senses in a very nonlinear fashion and it creates logical sense out of it.”
The brain is composed of billions of interlinked neurons, or nerve cells that transmit signals. Each neuron receives input from 8,000 other neurons and sends an output to another 8,000. If the input is enough to agitate the neuron, it fires, transmitting a signal through its axon in the direction of another neuron. The junction between two neurons is called a synapse, and that’s where signals move from one neuron to another.
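The threshold-firing behavior just described can be sketched as a leaky integrate-and-fire model; the threshold and leak values below are illustrative, not physiological constants.

```python
# Leaky integrate-and-fire sketch: a neuron accumulates input and
# fires when the accumulated potential crosses a threshold.
THRESHOLD = 1.0
LEAK = 0.9          # fraction of potential retained each step

def simulate(inputs):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * LEAK + x
        if potential >= THRESHOLD:   # enough input "agitates" it
            spikes.append(True)
            potential = 0.0          # reset after firing
        else:
            spikes.append(False)
    return spikes

# Weak input leaks away and never fires; sustained strong input does.
assert simulate([0.05] * 10) == [False] * 10
assert simulate([0.3, 0.3, 0.3, 0.3])[-1]
```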
“The brain is the hardware,” says Modha, “and from it arises processes such as sensation, perception, action, cognition, emotion and interaction.” Of this, the most important is cognition, the seat of which is believed to reside in the cerebral cortex.
The structure of the cerebral cortex is the same in all mammals. So researchers started with a real-time simulation of a small brain, about the size of a rat’s, in which they put together simulated neurons connected through a digital network. It took 8 terabytes of memory on a 32,768-processor BlueGene/L supercomputer to make it happen.
The simulation doesn’t replicate the rat brain itself, but rather imitates just the cortex. Despite being incomplete, the simulation is enough to offer insights into the brain’s high-level computational principles, says Modha.
The human cortex has about 22 billion neurons and 220 trillion synapses, making it roughly 400 times larger than the rat-scale model. A supercomputer capable of running a software simulation of the human brain doesn’t exist yet. Researchers would need a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes — a scale that supercomputer technology isn’t expected to hit for at least three years.
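The figures quoted above imply a few back-of-envelope numbers (synapses per neuron, the rat-scale model's size, the memory budget per synapse); the derivations below follow directly from the stated estimates.

```python
# Back-of-envelope numbers implied by the estimates above.
HUMAN_CORTEX_NEURONS = 22e9
HUMAN_CORTEX_SYNAPSES = 220e12
SCALE_FACTOR = 400            # human cortex vs. the rat-scale model
MEMORY_BYTES = 3.2e15         # the 3.2-petabyte estimate

# Synapses per neuron:
assert HUMAN_CORTEX_SYNAPSES / HUMAN_CORTEX_NEURONS == 10_000

# Implied size of the rat-scale model:
rat_neurons = HUMAN_CORTEX_NEURONS / SCALE_FACTOR
assert rat_neurons == 55e6    # about 55 million neurons

# Implied memory budget per synapse in the 3.2 PB estimate:
bytes_per_synapse = MEMORY_BYTES / HUMAN_CORTEX_SYNAPSES
assert 14 < bytes_per_synapse < 15
```

Under 15 bytes per synapse is a very lean representation, which underlines how much the 3.2-petabyte figure depends on the level of biological detail one chooses to simulate.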
While waiting for the hardware to catch up, Modha is hoping some of the coalition’s partners will inch forward toward their targets.
Software simulation of the human brain is just one half of the solution. The other is to create a new chip design that will mimic the neuron and synaptic structure of the brain.
That’s where Kwabena Boahen, associate professor of bioengineering at Stanford University, hopes to help. Boahen, along with other Stanford professors, has been working on implementing neural architectures in silicon.
One of the main challenges to building this system in hardware, explains Boahen, is that each neuron connects to others through 8,000 synapses. It takes about 20 transistors to implement a synapse, so building the silicon equivalent of 220 trillion synapses is a tall order, indeed.
“You end up with a technology where the cost is very unfavorable,” says Boahen. “That’s why we have to use nanotech to implement synapses in a way that will make them much smaller and more cost-effective.”
Boahen and his team are trying to create a device smaller than a single transistor that can do the job of 20 transistors. “We are essentially inventing a new device,” he says.
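The cost argument follows from simple arithmetic on the figures quoted above (20 transistors per synapse, 220 trillion synapses); the chip size used for comparison is an approximate figure for a large chip of the period, not from the article.

```python
# Transistor arithmetic behind the cost concern above.
TRANSISTORS_PER_SYNAPSE = 20
CORTEX_SYNAPSES = 220e12

total = TRANSISTORS_PER_SYNAPSE * CORTEX_SYNAPSES
assert total == 4.4e15        # 4.4 quadrillion transistors

# For comparison, a large chip of the era held on the order of
# 2 billion transistors (approximate figure, not from the article),
# so a conventional implementation would need millions of such chips.
chips = total / 2e9
assert 2e6 <= chips <= 3e6
```

That millions-of-chips scale is why Boahen argues for a nanotech device doing the work of 20 transistors, rather than scaling up conventional silicon.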
Meanwhile, at the University of California-Merced, Kello and his team are creating a virtual environment that could train the simulated brain to experience and learn. They are using the Unreal Tournament videogame engine to help train the system. When it’s ready, it will be used to teach the neural networks how to make decisions and learn along the way.
Modha and his team say they want to create a fundamentally different approach. “What we have today is a way where you start with the objective and then figure out an algorithm to achieve it,” says Modha.
Cognitive computing is hoping to change that perspective. The researchers say they want to create an algorithm that will be capable of handling most problems thrown at it.
The virtual environment should help the system learn. “Here there are no instructions,” says Kello. “What we have are basic learning principles so we need to give neural circuits a world where they can have experiences and learn from them.”
Getting there will be a long, tough road. “The materials are a big challenge,” says Kello. “The nanoscale engineering of a circuit that is programmable, extremely small and that requires extremely low power requires an enormous engineering feat.”
There are also concerns that the $5 million Darpa grant and IBM’s largess (researchers and resources), while enough to get the project started, may not be sufficient to see it through to the end.
Then there’s the difficulty of explaining that mimicking the cerebral cortex isn’t exactly the same as recreating the brain. The cerebral cortex is associated with functions such as thought, computation and action, while other parts of the brain handle emotions, co-ordination and vital functions. These researchers haven’t even begun to address simulating those parts yet.
Welcome to the PRESENCCIA project (EU)
This Integrated Project will undertake a Research Programme that has as its major goal the delivery of presence in wide area distributed mixed reality environments.
The environment will include a physical installation that people can visit both physically and virtually. The installation will be the embodiment of an artificial intelligent entity that understands and learns from its interaction with people. People who inhabit the installation will at any one time be physically there, virtually there but remote, or entirely virtual beings with their own goals and capabilities for interacting with one another and with embodiments of real people.
Specific subclasses of the installation will be used for the construction of a number of application scenarios, such as a persistent virtual community that embodies the project itself.
The core methodology will be to achieve this through the identification, understanding and exploitation of cerebral mechanisms for presence in conjunction with advances in the underlying technology for mixed reality display and interaction, with special attention to the interaction between people, and also between people and virtual people. Such cerebral mechanisms will be the basis for a core aspect of the IP which is the exploitation of brain-computer interfaces.
Processes within the environments adapt and correlate with the behaviour and state of people, and in addition people are able to effect changes within the environment through thought as well as through motor actions.