
The best way of looking at the brain is from within

TALK to neuroscientists about brain-computer interfaces (BCIs) for long enough, and the stadium analogy is almost bound to come up. This compares the neural activity of the brain to the noise made by a crowd at a football game. From outside the ground, you might hear background noise and be able to tell from the roars whether a team has scored. In a blimp above the stadium you can tell who has scored and perhaps which players were involved. Only inside it can you ask the fan in row 72 how things unfolded in detail.

Similarly, with the brain it is only by getting closer to the action that you can really understand what is going on. To get high-resolution signals, for now there is no alternative to opening up the skull. One option is to place electrodes onto the surface of the brain in what is known as electrocorticography. Another is to push them right into the tissue of the brain, for example by using a grid of microelectrodes like BrainGate's Utah array.

Just how close you have to come to individual neurons to operate BCIs is a matter of debate. In people who suffer from movement disorders such as Parkinson's disease, spaghetti-like leads and big electrodes are used to carry out deep-brain stimulation over a fairly large area of tissue. Such treatment is generally regarded as effective. Andrew Jackson of the University of Newcastle thinks that recording activity by ensembles of neurons, of the sort that gets picked up by electrocorticography arrays, can be used to decode relatively simple movement signals, like an intention to grasp something or to extend the elbow.

But to generate fine-grained control signals, such as the movement of individual fingers, more precision is needed. "These are very small signals, and there are many neurons packed closely together, all firing together," says Andrew Schwartz of the University of Pittsburgh. Aggregating them inevitably means sacrificing detail. After all, individual cells can have very specific functions, from navigation to facial recognition. The 2014 Nobel prize for medicine was awarded for work on place and grid cells, which fire when animals reach a specific location; the idea of the "Jennifer Aniston neuron" stems from research showing that single neurons can fire in response to pictures of a specific celebrity.

Companies like Neuralink and Kernel are betting that the most ambitious visions of BCIs, in which thoughts, images and movements are seamlessly encoded and decoded, will require high-resolution implants. So, too, is America's Defense Advanced Research Projects Agency (DARPA), an arm of the Pentagon, which this year distributed $65m among six organisations to create a high-resolution implantable interface. BrainGate and others continue to work on systems of their own.

But the challenges that these researchers face are truly daunting. The ideal implant would be safe, small, wireless and long-lasting. It would be capable of transmitting huge amounts of data at high speed. It would interact with many more neurons than current technology allows (the DARPA programme sets its grant recipients a target of 1m neurons, along with a deadline of 2021 for a pilot trial to get under way in humans). It would also have to navigate an environment that Claude Clément of the Wyss Centre likens to a jungle by the sea: humid, hot and salty. "The brain is not the right place to do technology," he says. As the centre's chief technology officer, he should know.

Da neuron, ron, ron

That is not stopping people from trying. The efforts now being made to create better implants can be divided into two broad categories. The first reimagines the current technology of small wire electrodes. The second heads off in new, non-electrical directions.

Start with ways to make electrodes smaller and better. Ken Shepard is a professor of electrical and biomedical engineering at Columbia University; his lab is a recipient of DARPA funds, and is aiming to build a device that could eventually help blind people with an intact visual cortex to see by stimulating precisely the right neurons in order to produce images inside their brains. He thinks he can do so by using state-of-the-art CMOS (complementary metal-oxide semiconductor) electronics.

Dr Shepard is aware that any kind of penetrating electrode can cause cell damage, so he wants to build "the mother of all surface recording devices", which will sit on top of the cortex and under the membranes that surround the brain. He has already created a prototype of a first-generation CMOS chip, which measures about 1cm by 1cm and contains 65,000 electrodes; a slightly larger, second-generation version will house 1m sensors. But like everyone else trying to make implants work, Dr Shepard is not just cramming sensors onto the chip. He also has to add the same number of amplifiers, a converter to turn the analogue signals of action potentials into the digital 0s and 1s that machines can process, and a wireless link to send (or receive) data to a relay station that will sit on the scalp. That, in turn, will send (or receive) the data wirelessly to external processors for decoding.
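As a rough illustration of that analogue-to-digital step, the sketch below amplifies a microvolt-scale electrode sample and quantises it into an integer code. The gain, bit depth and voltage range are assumptions chosen purely for illustration, not the specifications of Dr Shepard's chip.

```python
import numpy as np

GAIN = 1_000        # amplifier gain (assumed)
ADC_BITS = 10       # converter bit depth (assumed)
V_RANGE = 1.0       # converter full-scale range in volts (assumed)

def digitise(sample_uv: float) -> int:
    """Turn one electrode sample, in microvolts, into an integer ADC code."""
    amplified = sample_uv * 1e-6 * GAIN                    # microvolts -> volts
    clipped = float(np.clip(amplified, -V_RANGE / 2, V_RANGE / 2))
    levels = 2 ** ADC_BITS
    return int((clipped / V_RANGE + 0.5) * (levels - 1))   # 0 .. 1023

print(digitise(-80.0))   # a typical spike trough lands below the midscale code
```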

The device also has to be powered, another huge part of the implantables puzzle. No one in the field puts faith in batteries as a source of power. They are too bulky, and the risk of battery fluid leaking into the brain is too high. Like many of his peers, Dr Shepard uses inductive coupling, whereby currents passing through a coiled wire create a magnetic field that can induce a current in a second coil (the way that an electric toothbrush gets recharged). That job is done by coils on the chip and on the relay station.
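For a feel of the physics, Faraday's law gives the voltage induced across the implant's coil as V = M·dI/dt, where M is the mutual inductance between the two coils. The numbers below are assumptions picked only to make the sketch concrete; they do not describe any real device.

```python
import math

M_HENRY = 1e-6       # assumed mutual inductance between the two coils
FREQ_HZ = 13.56e6    # a frequency commonly used for inductive power (ISM band)
I_PEAK_A = 0.1       # assumed peak drive current in the external coil

# For a sinusoidal drive I(t) = I_peak * sin(2*pi*f*t), the peak of dI/dt is
# 2*pi*f*I_peak, so the peak voltage induced in the implant coil is:
v_peak = M_HENRY * 2 * math.pi * FREQ_HZ * I_PEAK_A
print(f"peak induced voltage = {v_peak:.2f} V")   # about 8.5 V with these numbers
```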

Over on Americas west coast, a startup called Paradromics is also using inductive coupling to power its implantable. But Matt Angle, its boss, does not think that souped-up surface recordings will deliver sufficiently high resolution. Instead, he is working on creating tiny bundles of glass and metal microwires that can be pushed into brain tissue, a bit like a Utah array but with many more sensors. To stop the wires clumping together, thereby reducing the number of neurons they engage with, the firm uses a sacrificial polymer to splay them apart; the polymer dissolves but the wires remain separated. They are then bonded onto a high-speed CMOS circuit. A version of the device, with 65,000 electrodes, will be released next year for use in animal research.

That still leaves plenty of problems to solve before Paradromics can meet its DARPA-funded goal of creating a 1m-wire device that can be used in people. Chief among them is coping with the amount of data coming out of the head. Dr Angle reckons that the initial device produces 24 gigabits of data every second (streaming an ultra-high-definition movie on Netflix uses up to 7GB an hour). In animals, these data can be transmitted through a cable to a bulky aluminium head-mounted processor. That is a hard look to pull off in humans; besides, such quantities of data would generate far too much heat to be handled inside the skull or transmitted wirelessly out of it.
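Those two figures are easy to compare directly; the sketch below is pure unit conversion on the numbers just quoted.

```python
implant_gbit_per_s = 24      # figure quoted above
netflix_gb_per_hour = 7      # figure quoted above

implant_gb_per_hour = implant_gbit_per_s / 8 * 3600   # bits -> bytes, seconds -> hours
print(implant_gb_per_hour)                            # 10,800 GB per hour
print(implant_gb_per_hour / netflix_gb_per_hour)      # roughly 1,540 UHD streams at once
```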

So Paradromics, along with everyone else trying to create a high-bandwidth signal into and out of the brain, has to find a way to compress the data without compromising the speed and quality of the information sent. Dr Angle reckons he can do this in two ways: first, by ignoring the moments of silence in between action potentials, rather than laboriously encoding them as a string of zeros; and second, by concentrating on the wave forms of specific action potentials rather than recording each point along their curves. Indeed, he sees data compression as the company's big selling-point, and expects others that want to create specific BCI applications or prostheses simply to plug into its feed. "We see ourselves as the neural data backbone, like a Qualcomm or Intel," he says.
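A minimal sketch of those two ideas: transmit only spike events rather than the silent samples between them, and summarise each spike by the index of its best-matching waveform template rather than every point along its curve. The threshold, snippet length and matching rule are illustrative assumptions, not Paradromics' actual pipeline.

```python
import numpy as np

SAMPLE_RATE_HZ = 30_000      # a common extracellular sampling rate (assumed)
THRESHOLD_UV = -50.0         # crude spike-detection threshold (assumed)
SNIPPET_LEN = 32             # samples kept around each detected spike

def compress(trace_uv, templates):
    """Reduce a dense voltage trace to sparse (sample_index, template_id) events.

    Silence between spikes is simply never transmitted, and each spike is
    described by one template index instead of all its raw samples.
    """
    events = []
    i, n = 0, len(trace_uv)
    while i < n - SNIPPET_LEN:
        if trace_uv[i] < THRESHOLD_UV:           # threshold crossing = a spike
            snippet = trace_uv[i:i + SNIPPET_LEN]
            # nearest template by squared Euclidean distance
            template_id = int(np.argmin(np.sum((templates - snippet) ** 2, axis=1)))
            events.append((i, template_id))
            i += SNIPPET_LEN                     # skip past this spike
        else:
            i += 1                               # silent sample: drop it
    return events

# Example: a one-second trace with a single spike collapses to one event.
trace = np.zeros(SAMPLE_RATE_HZ)
trace[1000:1032] = -80.0                         # a crude square "spike"
templates = np.stack([np.full(32, -80.0), np.full(32, -30.0)])
print(compress(trace, templates))                # [(1000, 0)]
```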

Meshy business

Some researchers are trying to get away from the idea of wire implants altogether. At Brown University, for example, Arto Nurmikko is leading a multidisciplinary team to create "neurograins", each the size of a grain of sugar, that could be sprinkled on top of the cortex or implanted within it. Each grain would have to have built-in amplifiers, analogue-to-digital converters and the ability to send data to a relay station, which could power the grains inductively and pass the information to an external processor. Dr Nurmikko is testing elements of the system in rodents; he hopes eventually to put tens of thousands of grains inside the head.

Meanwhile, in a lab at Harvard University, Guosong Hong is demonstrating another innovative interface. He dips a syringe into a beaker of water and injects into it a small, billowing and glinting mesh. It is strangely beautiful to watch. Dr Hong is a postdoctoral fellow in the lab of Charles Lieber, a professor of chemistry; they are both working to create a neural interface that blurs the distinction between biology and electronics. Their solution is a porous net made of a flexible polymer called SU-8, studded with sensors and conductive metal.

The mesh is designed to solve a number of problems. One has to do with the brains immune response to foreign bodies. By replicating the flexibility and softness of neural tissue, and allowing neurons and other types of cells to grow within it, it should avoid the scarring that stiffer, solid probes can sometimes cause. It also takes up much less space: less than 1% of the volume of a Utah array. Animal trials have gone well; the next stage will be to insert the mesh into the brains of epilepsy patients who have not responded to other forms of treatment and are waiting to have bits of tissue removed.

A mile away, at MIT, members of Polina Anikeeva's lab are also trying to build devices that match the physical properties of neural tissue. Dr Anikeeva is a materials scientist who first dived into neuroscience in the lab of Karl Deisseroth at Stanford University, a pioneer of optogenetics, a way of genetically engineering cells so that they turn on and off in response to light. Her reaction upon seeing a (mouse) brain up close for the first time was amazement at how squishy it was. "It is problematic to have something with the elastic properties of a knife inside something with the elastic properties of a chocolate pudding," she says.

One way she is dealing with that is to borrow from the world of telecoms by creating a multichannel fibre with a width of 100 microns (one micron is a millionth of a metre), about the same as a human hair. That is thicker than some of the devices being worked on elsewhere, but the main thing that distinguishes it is that it can do multiple things. "Electronics with just current and voltage is not going to do the trick," she says, pointing out that the brain communicates not just electrically but chemically, too.

Dr Anikeeva's sensor has one channel for recording using electrodes, but it is also able to take advantage of optogenetics. A second channel is designed to deliver channelrhodopsin, an algal protein that can be smuggled into neurons to make them sensitive to light, and a third to shine a light so that these modified neurons can be activated.

It is too early to know if optogenetics can be used safely in humans: channelrhodopsin has to be incorporated into cells using a virus, and there are question-marks about how much light can safely be shone into the brain. But human clinical trials are under way to make retinal ganglion cells light-sensitive in people whose photoreceptor cells are damaged; another of the recipients of DARPA funds, Fondation Voir et Entendre in Paris, aims to use the technique to transfer images from special goggles directly into the visual cortex of completely blind people. In principle, other senses could also be restored: optogenetic stimulation of cells in the inner ear of mice has been used to control hearing.

Dr Anikeeva is also toying with another way of stimulating the brain. She thinks that a weak magnetic field could be used to penetrate deep into neural tissue and heat up magnetic nanoparticles that have been injected into the brain. If nearby neurons had been modified to carry heat-sensitive capsaicin receptors, the increased temperature would trigger those receptors and cause the neurons to fire.

Another candidate for recording and activating neurons, beyond voltage, light and magnets, is ultrasound. Jose Carmena and Michel Maharbiz at the University of California, Berkeley, are the main proponents of this approach, which again involves the insertion of tiny particles (which they call "neural dust") into tissue. Passing ultrasound through the body affects a crystal in these motes, which vibrates like a tuning fork; that produces a voltage to power a transistor. Electrical activity in adjacent tissue, whether muscles or neurons, can change the nature of the ultrasonic echo given off by the particle, so this activity can be recorded.
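A toy model of that readout, under heavy assumptions: the mote reflects the ultrasound carrier, and local electrical activity modulates the amplitude of the echo. Every constant below is invented for illustration and does not describe the Berkeley system.

```python
import numpy as np

CARRIER_HZ = 1.85e6          # ultrasound carrier frequency (assumed)
BASE_REFLECTIVITY = 0.5      # mote reflectivity with no local activity (assumed)
MODULATION_PER_MV = 0.002    # change in reflectivity per mV of activity (assumed)

def backscatter(t, local_potential_mv):
    """Echo from one mote: a carrier whose amplitude tracks the local potential."""
    reflectivity = BASE_REFLECTIVITY + MODULATION_PER_MV * local_potential_mv
    return reflectivity * np.sin(2 * np.pi * CARRIER_HZ * t)

t = np.linspace(0, 2e-6, 8)                        # a few microseconds of echo
print(backscatter(t, local_potential_mv=0.0))      # quiet tissue
print(backscatter(t, local_potential_mv=100.0))    # larger echo when tissue is active
```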


Many of these new efforts raise even more questions. If the ambition is to create a whole-brain interface that covers multiple regions of the brain, there must be a physical limit to how much additional material, be it wires, grains or motes, can be introduced into a human brain. If such particles can be made sufficiently small to mitigate that problem, another uncertainty arises: would they float around in the brain, and with what effects? And how can large numbers of implants be put into different parts of the brain in a single procedure, particularly if the use of tiny, flexible materials creates a "wet noodle" problem whereby implants are too floppy to make their way into tissue? (Rumour has it that Neuralink may be pursuing the idea of an automated sewing machine designed to get around this issue.)

All this underlines how hard it will be to engineer a new neural interface that works both safely and well. But the range of efforts to create such a device also prompts optimism. "We are approaching an inflection-point that will enable at-scale recording and stimulation," says Andreas Schaefer, a neuroscientist at the Crick Institute in London.

Even so, being able to get the data out of the brain, or into it, is only the first step. The next thing is processing them.

This article appeared in the Technology Quarterly section of the print edition under the headline "Inside intelligence"
