
The State of Whole Brain Emulation in 2015

Viewed at its most fundamental level, the brain is an information processing device. Human brains excel at higher cognitive tasks, and they do so by employing a myriad of information processing techniques that we are only now beginning to uncover with the second cultural advent of Machine Learning. Organized clumps of neurons perform a great deal of computation using comparatively little energy, too: the typical human brain draws between 20 and 40 watts of power to do all of its information processing.

3D reconstruction of the brain and eyes from CT scanned DICOM images. (source: Dale Mahalko)

Yet for all its capabilities, owning a biological brain comes at a steep cost. There are countless ways in which parts of a brain, and at some point inevitably the whole brain, cease to function – this is what we call death. For patients with brain injuries such as strokes, death comes in episodes, or even gradually, as in neuro-degenerative diseases such as Alzheimer’s.

The nature of biological death is two-fold. First, the hardware ceases to function, so information processing stops. If only parts of the brain stop functioning, you might experience loss of sensory perception, motor control, or memory. Every single function that makes up a person can fail in this fashion. Clinically, there is a whole spectrum of observable failures, ranging from no noticeable deficit up to complete loss of consciousness.

The second aspect of death is the destruction of the apparatus that contains and processes information. In CS terms, not only does information processing stop, but the infrastructure necessary to run these processes is lost. Contrary to classical information technology, hardware and software are not entirely separate in neurobiology.

CT-scan of the brain with a massive middle cerebral artery infarct, region of cell death appears darker than healthy tissue. (http://commons.wikimedia.org/wiki/File:MCA_Territory_Infarct.svg)

In a lot of ways, the hardware offered to us by biochemistry is capable of amazing feats. Our neuronal architecture is excellent at performing statistical data processing, which incidentally is a big portion of what’s required to make sense of the world around us. In contrast, silicon-based computers excel at running deterministic operations, such as calculations and strict logical reasoning. Either architecture can emulate the other, though. Human brains are Turing-complete and can perform any action that can be performed by a computer. We may not be as good or as fast, but we can do it in principle. Likewise, computers can perform the types of operations predominant in our brains, but again not as quickly as a blob of living matter might. The important point is that these two architectures are compatible in principle.

Given the capabilities and drawbacks of each, biological and synthetic information processing, it makes sense to aim for a fusion of the two. What if we could transpose our minds onto a less fragile, non-biological substrate? The idea of combining classical and biological computing in order to overcome the limitations of both is not new. The benefits would be immeasurable and immediate: the ability to make backups of minds, and an untold potential for further growth and development.

So, given the obvious advantage of cheating death, why are we not living in silico by now?

Step 1 – Extracting the Information

Golgi-stained neurons from somatosensory cortex in the macaque monkey, from brainmaps.org

This is what the hardware of the brain looks like at the neuron level. You might be tempted to think of a neuron as the biological equivalent of a transistor or a memory circuit, and it does share some of those properties, but the most important difference to recognize is that there is a huge variety of neurons. They come in many different shapes and models, and each individual neuron is configured differently.

In a classical computer, the information it contains, the software that processes the information, and the hardware that enables the programs to run, are all separate facilities. In the brain, however, all these are linked. The information stored in a single neuron is linked to its working configuration.

In order to transition a mind from working in vivo to a virtual substrate, we need to copy its essence out of the biological clump of matter. This means extracting all of the structure, the neuronal configuration in its entirety. Each neuron has connections to other neurons, so we need to capture those connections. Neurons operate on different chemical models, so we need to capture each neuron's type as well. Furthermore, neuronal behavior is often modified individually by complex proteins, so we need to know about those too. And the cells surrounding neurons (such as astrocytes) perform computing tasks of their own, so they need to be scanned along with everything else.
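To make that concrete, here is a minimal sketch, in Python, of the kind of record such a scan would have to produce for every single cell. All of the names and fields are hypothetical; a real reconstruction format would be far richer.

```python
from dataclasses import dataclass, field
from enum import Enum

class CellKind(Enum):
    PYRAMIDAL = "pyramidal"
    INTERNEURON = "interneuron"
    ASTROCYTE = "astrocyte"   # glial cells compute too, so they get records as well

@dataclass
class Synapse:
    target_id: int            # id of the postsynaptic cell
    weight: float             # effective strength; sign encodes excitation vs. inhibition

@dataclass
class CellRecord:
    cell_id: int
    kind: CellKind
    position_um: tuple        # (x, y, z) location in the scanned volume, in micrometers
    synapses: list = field(default_factory=list)          # outgoing Synapse objects
    protein_markers: dict = field(default_factory=dict)   # e.g. channel or receptor densities

# a toy record for one pyramidal cell connecting to cell 42
cell = CellRecord(cell_id=7, kind=CellKind.PYRAMIDAL, position_um=(10.0, 4.5, 0.3),
                  synapses=[Synapse(target_id=42, weight=0.8)])
print(cell)
```

Even this toy structure hints at the scale of the problem: every field would have to be filled in, accurately, for on the order of a hundred billion cells.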

Pyramidal neuron from the hippocampus, stained for green fluorescent protein

As you can see, getting all of this information, in some cases down to the individual molecule, is extraordinarily difficult. In the cortex slide presented above, you can just about make out the connections between the neurons. Given thin enough slices of the entire brain, we might just be able to reconstruct those connections into a computer model with today’s technology. However, we are far from capturing the other information I mentioned. Identifying patterns in optical microscopy requires staining agents, and there is a limit to the number of useful stains that can be applied to a given sample, so this approach is never going to be detailed enough. Electron microscopy might do it, but we would need some serious post-processing to identify the presence of important proteins in a cell. On top of that, whole-brain EM scans would be a logistical impossibility with today’s hardware.

Large-scale scan of human brain

Right now we are certainly nowhere near the point where we can make usable electron microscope slides of an entire human brain. This will probably change as we make progress in image-processing AI. Ideally, this process would be an automated destructive scan in which a brain is placed in a machine that sequentially ablates layers of cells and takes high-resolution EM pictures of each exposed layer.
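A hypothetical control loop for such a destructive scan might look like the sketch below. The ablation, imaging, and segmentation steps are stubbed out as placeholders, since nothing like this integrated pipeline exists today.

```python
def ablate_next_layer(sample):
    """Placeholder: remove one layer of cells; return True while tissue remains."""
    if sample["layers_left"] == 0:
        return False
    sample["layers_left"] -= 1
    return True

def acquire_em_image(sample):
    """Placeholder: capture a high-resolution EM image of the exposed surface."""
    return {"z": sample["layers_left"]}

def segment_cells(image):
    """Placeholder: image-processing AI extracts cell bodies and connections."""
    return [{"z": image["z"], "cells": []}]

def destructive_scan(sample):
    reconstruction = []
    while ablate_next_layer(sample):
        image = acquire_em_image(sample)
        reconstruction.extend(segment_cells(image))
    return reconstruction

# toy run: a "sample" with three layers
print(destructive_scan({"layers_left": 3}))
```

The hard parts are, of course, hidden inside the placeholders: the physical ablation and imaging hardware, and the segmentation AI that turns pictures into cell records.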

Ideally, such a scan would cover not only the neocortex but the whole brain, including the brain stem, or even the whole body if feasible. While we are primarily interested in capturing the higher-level functions of the neocortex, we also need to know how the periphery is wired. Gathering a whole-body picture would let us make sense of the circuitry more easily, even if we end up throwing most of the data away. It is likely sufficient to use ordinary light microscopy (LM) scans to capture body data. I am not aware of any project aimed at creating a cybernetic simulation of physiological systems from whole-body microtomes, but it seems like a necessary prerequisite for brain emulation.

So how are we doing on this front, in 2015? We are now routinely using microscopic imaging to build neural models, but since we are still in the basic research phase we are only doing it for generalized cases. At this point, I am not aware of any effort to capture the configuration of a specific brain for the purpose of emulating its contents. The Whole Brain Project has put out the Whole Brain Catalog, an open source large-scale catalog of the mouse brain – but detailed information about neuronal connections is hard to come by. We are still working on a map of a generic Drosophila connectome, so capturing a mammalian brain’s configuration seems as far off as ever. On the other hand, proactive patients are already generating 3D models of cancerous masses obtained from MRI scans, so there is certainly hope that technological convergence will speed up this kind of data gathering and modeling in the near future.

Step 2 – Making Sense of the Information

Suppose we managed to extract all the pertinent structural and chemical information out of a brain, and we are now saddled with a big heap of data from that scan. What we need to do with it in order to make that mind “run” on a virtual platform depends largely on the type of emulation we have in mind.

It’s all about detail. There are simulations in biology that aim to accurately depict what goes on in a cell at the molecular level. Here, interactions between individual proteins are simulated on a supercomputer, requiring massive amounts of memory and processing power. If we were to “plug in” detailed brain scan data, we could do so relatively easily without much conversion: for every molecule identified in the scan, we would simply place its virtual counterpart into the simulation. However, simulating even a handful of neurons in this fashion would quickly consume the processing power of an entire supercomputing facility. This is obviously not practical.

Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics – detailing a process in spinocerebellar ataxia (SCA)

The solution is to look at the outcome of those molecular interactions. It turns out that the products of chemical processes are relatively regular and dependable: given the right conditions, two H2 molecules and one O2 molecule will always combine to form two H2O molecules. We can use that observational knowledge of chemical processes to build a straightforward mathematical model of the expected behavior of a neuron, and then run that simplified model on a computer very efficiently.
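As a minimal illustration of such a mathematical stand-in, here is a leaky integrate-and-fire neuron, a standard textbook abstraction rather than anything specific to a scanned cell. All parameter values are merely illustrative.

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau and return spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * current) * dt / tau
        v += dv
        if v >= v_thresh:              # threshold crossed: record a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# constant 2 nA input for 100 ms produces a regular spike train
print(simulate_lif([2e-9] * 1000))
```

A handful of multiplications per time step replaces the simulation of billions of molecular collisions, which is exactly the kind of trade the rest of this section is about.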

This means we can solve the computing power issue by using smarter mathematical stand-ins for chemical processes. But now we have two problems: how much can we simplify neuronal behavior and still get enough fidelity to run a human mind without any perceptible loss? And how do we translate the data from our scan into a representation that is faithful to the original yet lends itself to relatively efficient computation?

The best answer, given today’s knowledge about neuronal information processing, may be to choose a detail level that emulates the behavior of cortical columns, plus perhaps a few carefully chosen single neurons. Cortical columns are attractive units to emulate because they sit at an abstraction level high enough to be easily computable yet low enough to reflect rich detail. That said, given an EM scan of a single column (or a single neuron, for that matter), we presently do not have enough knowledge about its individual function to accurately translate it into a digital representation. But we’re working on it.
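Purely as an illustration of what “a column as a unit” might mean in code, the sketch below treats a cortical column as a single input/output mapping. The sigmoid transfer function is an arbitrary placeholder; finding the right function for a given scanned column is exactly the open problem described above.

```python
import math

class ColumnUnit:
    """One cortical column abstracted into a single functional unit (hypothetical)."""
    def __init__(self, weights, bias=0.0):
        self.weights = weights          # effective influence of each afferent input line
        self.bias = bias                # baseline excitability of the column

    def respond(self, inputs):
        """Map afferent firing rates to one efferent firing rate in the range 0..1."""
        drive = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-drive))   # placeholder sigmoid response

column = ColumnUnit(weights=[0.8, -0.3, 0.5])
print(column.respond([1.0, 0.2, 0.7]))
```

The appeal of this level of abstraction is that a few million such units are computationally cheap; the risk is that a single fixed transfer function throws away exactly the individual configuration the scan was supposed to preserve.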

Cajal Blue Brain: Magerit supercomputer (CeSViMa). The cluster consists of 245 PS702 nodes, each with two eight-core 64-bit POWER7 processors at 3.0 GHz (16 cores total), 32 GB of RAM, and 300 GB of local disk. Each core provides 18.38 Gflops.

The Blue Brain Project aims to reverse-engineer mammalian brains and then simulate them at a molecular level. This momentous effort has yielded a lot of detailed knowledge about how neurons and cortical columns work, and how they can be simulated. However, the project is occupied with basic research and simulates cellular processes in high detail. While the results generated by it are essential, this is not an effort that allows us to meaningfully run entire minds on a computer – something to keep in mind when reading press reports about the Blue Brain Project.

Step 3 – Running Minds in silico

So suppose we have found a way to digitize brains and to translate the information from such a scan into a representation that can run efficiently on a classical computer. What happens when we actually execute that code?

Thyroid Hormone Effects on Sensory Perception, Mental Speed, Neuronal Excitability and Ion Channel Regulation

Compared to the steps before, this one is relatively easy. Once we have found a good model framework that can run a digital representation of a brain efficiently, this functional core needs to be executed in a digital milieu that provides connectivity to (emulated) peripheral sensory and motor neurons, as well as a simulated body chemistry. In order to run a brain, we will need a functioning virtual endocrine system as well. While we know how to do this in principle from cybernetic models, there are of course still knowledge gaps to fill regarding the management and representation of a virtual body’s state.
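A hypothetical top-level loop for this step might look like the following sketch. Every class and method name is made up for illustration; the point is only the architecture: an emulation core exchanging sensory input and motor output with a virtual body that also tracks a crude endocrine state.

```python
class VirtualBody:
    """Toy stand-in for the digital milieu: senses, motor effects, body chemistry."""
    def __init__(self):
        self.hormones = {"cortisol": 0.3, "adrenaline": 0.1}   # crude endocrine state

    def sense(self):
        return {"vision": [], "touch": [], "hormones": dict(self.hormones)}

    def act(self, motor_commands):
        # a real model would update posture, heart rate, hormone release, etc.
        self.hormones["adrenaline"] *= 0.99

class EmulatedBrain:
    """Placeholder for the actual emulation core produced in Step 2."""
    def step(self, sensory_input):
        return {"motor": "noop"}

def run(brain, body, steps):
    for _ in range(steps):
        body.act(brain.step(body.sense()))

run(EmulatedBrain(), VirtualBody(), steps=10)
```

Everything interesting happens inside EmulatedBrain.step and VirtualBody, but the loop itself shows why this step is comparatively easy: it is ordinary systems engineering once the hard models exist.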

Discussions still rage about the feasibility of mind uploading. From my perspective, there are massive technological and scientific impediments still to overcome but nothing in particular seems to prevent this development from playing out.

Some researchers dismiss the idea by pointing to the prohibitive computational load required to run a full-scale simulation of a brain, but the verdict is still out on methods that emulate higher-level structures such as cortical columns efficiently. It seems to me that once basic research provides useful mathematical abstractions of the behavior of brain components, there is no reason why biology and classical information processing could not meet halfway, at a point where computation does become feasible at scale.
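To see why the level of abstraction matters so much for feasibility, here is a deliberately crude back-of-the-envelope comparison. All figures are common order-of-magnitude estimates rather than measurements, and the column-level line assumes columns can be treated as single cheap units, which is precisely what has not yet been established.

```python
NEURONS = 8.6e10          # often-cited estimate for the human brain
SYNAPSES_PER_NEURON = 1e4 # rough average
COLUMNS = 2e6             # rough count of neocortical columns
UPDATES_PER_SECOND = 1e3  # assume a 1 kHz update rate
FLOPS_PER_UPDATE = 10     # assume ~10 operations per element per update

synapse_level = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SECOND * FLOPS_PER_UPDATE
column_level = COLUMNS * UPDATES_PER_SECOND * FLOPS_PER_UPDATE

print(f"synapse-level: {synapse_level:.1e} FLOPS")   # ~1e19, exascale territory
print(f"column-level:  {column_level:.1e} FLOPS")    # ~1e10, commodity hardware
```

The nine orders of magnitude between those two lines are the whole argument: the question is not whether brute force is affordable today, but how far the abstraction can be pushed before the emulated mind stops being the same mind.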

Moving Forward

We are at an interesting junction in our technological and scientific development. Computational resources are comparatively cheap, we are in the midst of a new wave of AI algorithms allowing for more sophisticated data processing, and there are a lot of interdisciplinary scientists and engineers who could work on this.

However, there is a big problem. Aside from a few laudable exceptions, research data is not available to the public at large. Heck, it’s not even available to competing research institutions. Considering how the internet was once envisioned as a medium for publishing and interlinking research data, this is still one of its unfulfilled promises. Press releases about discoveries made by well-funded projects often lure us as a civilization into a false sense of accomplishment, because more often than not the specifics of those discoveries remain inaccessible.

It is easy to fall victim to the misconception that whenever, say, the Blue Brain Project puts out another press release, we are moving closer to moving our brains into silicon. This is not true. Access to basic research data is tremendously restricted and, no matter how press releases are worded, the scientists mentioned rarely actually work on or towards this specific goal. For the most part, veiled allusions to mind uploading are merely used as convenient science fiction references to generate public buy-in. Pharmacology is what pays the bills, not pie-in-the-sky mind uploading.

Liberating Research Data

It is easy to see that we could be on the threshold of a golden age of citizen science, one that could increase our overall science and engineering output in an unprecedented way. Access to cheap high tech, 3D printing and modeling, and the infrastructure for rapid information interchange is in place. All we need now is access to the actual body of human knowledge: not the summarized form that’s in Wikipedia, but actual research data, meaning free access to papers and publications and, perhaps an even harder sell, access to the raw data as well.

If we could convince a critical mass of research groups to go fully open source, humanity as a whole stands to make the next big leaps. However, if this open sourcing does not happen, research will remain in walled gardens and it will move along the very predictable paths of carefully incremented progress – enough to get a competitive edge in pharma, but insufficient to upset the status quo.

And make no mistake, brain emulation, as any other radical endeavor, is all about upsetting the status quo. Because of this fringe component, progress in this area will likely come from outside of big-budget research facilities. It may even make progress based on the efforts of hobbyists – such as biomedical researchers engaging in side projects. The question becomes first and foremost, what can we do to enable them?
