Written by Nick Jänne
Edited by Jeremy Chen and Claire Shudde
Illustrated by Jacquelyn Roberts
Humans are mystified by the brain—the software-hardware package behind every book worth reading, every song worth listening to, the start of civilization, and perhaps one day the end of it. Even today, we know very little about what goes on behind the average person’s eyes. And while we have uncovered some fundamental properties of cognitive function over the last 4,000 years, curiosity and incomplete understanding have led to science fiction-level fantasies of what might come to pass as more of the truth unravels. We know the popular ones: “The Force” from Star Wars, or people wearing tin-foil hats to block potential mind-control technologies. Perhaps most recently you’ve heard of Elon Musk’s company Neuralink. To some, this company seems to promise an all-powerful computer implanted beneath your skull, enabling mind-reading control of the world around you. To the tinfoil crowd, the unthinkable possibilities of this technology prompt a shared vein of fear. In reality, Neuralink is a single runner in the decades-long race to create the next generation of brain-machine interfaces. What’s more, these devices are beginning to change people’s lives for the better—perhaps in ways you might not expect.
In the 19th century, Emil du Bois-Reymond discovered that the brain uses electricity to communicate with the rest of our body through a subway network of biological communication channels called nerves.1 Just like wires inside a computer, nerves use electrical impulses to carry information between individual cells called neurons. Collections of neurons form the computational building blocks of the brain’s network and govern how we use our bodies. Broadly, groups of neurons take on specific functions: some handle sensing at your fingertips, while others are destined to solve calculus homework. Some neurons, like those in the motor cortex, are responsible for moving our limbs and learning how to cartwheel. In essence, the human body is a congregation of electrically-active, purpose-built modules communicating with one another: a biologically driven computer.
Brain-machine interfaces are devices that act as points of access to the inner workings of this biologically driven computer, the brain. Access is gained through a grid of small carbon electrodes surgically implanted in different sections of brain tissue. Once implanted, these electrodes function like antennae that listen to signals emitted by nearby neurons as they play their part in the larger network. A typical brain-machine interface device of today is about the size of a quarter, with each electrode being as narrow as half the width of a human hair.2
Where computers and brains begin to differ is in the complexity of their signals. Computers communicate through an agreed-upon language known as an encoding. When you press a key on the keyboard, a binary representation of the letter you pressed is sent out, rather than the letter itself. The computer screen receives this representation, decodes it, and transforms it back into the original letter for display. Similarly, brain signals have their own encodings, but unlike computers, we don’t know the rules that make up this language. This conundrum is the fundamental challenge of neural signal decoding: mutual translation between signals our brains can understand and signals that computers can.
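To make the keyboard analogy concrete, here is a minimal Python sketch of that encode-decode round trip, using the standard ASCII encoding:

```python
# Pressing "A" on a keyboard transmits bits, not the letter itself.
letter = "A"

# Encode: the character becomes its 8-bit binary representation.
bits = format(ord(letter), "08b")
print(bits)  # 01000001

# Decode: the receiver reverses the same rule to recover the letter.
decoded = chr(int(bits, 2))
print(decoded)  # A
```

The brain presumably performs an analogous round trip, but unlike ASCII, nobody has published its codebook.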
What would be the significance of cracking the neural code? Consider a case where a person has suffered a spinal cord injury and can no longer move their arms and legs, despite their conscious effort to do so. At the site of the damaged spinal cord is a communication gap between the brain’s signals to move the body and the body actually responding. If brain-machine interfaces can successfully identify and decode this signal to move, perhaps functionality could be restored by “leapfrogging” the injured spinal cord. Short of that, brain-machine interfaces might also be able to interpret these signals as controls for something else entirely, such as a wheelchair, a set of robot arms, or a truly smart assistive device for those critically impaired.
Monkey test subjects have proven instrumental to the advancement of neural decoding and brain-machine interfaces, despite non-human primate research being its own subject of controversy. In 2003, a research group from Duke successfully taught a monkey to control a robot arm with its brain using some video game trickery.3 In this experiment, the monkey is first run through a series of teaching trials where it receives food rewards for accomplishing tasks on a video game screen by moving a joystick. Early-stage algorithms used to decode these neural signals were based on mathematical techniques drawn from statistical analysis and estimation. Once the brain-machine interface has been given enough time to learn patterns in the monkey’s brain activity, the joystick is disconnected. Now, instead of the joystick controlling the robot and game movement, the monkey’s decoded brain signals take its place and begin to run the show. Unfortunately, these kinds of studies have also drawn accusations of misconduct from animal rights activists. In 2022, Neuralink became the subject of a probe into possible violations of the Animal Welfare Act led by the U.S. Department of Agriculture’s (USDA) Inspector General. The USDA found no compliance breaches at Neuralink beyond a 2019 incident that had already been reported, but the case highlights the importance of vigilance and ethical conduct in a field where monkeys have been key to many breakthroughs.4
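The “statistical analysis and estimation” behind those early decoders can be illustrated with a toy linear model. The sketch below uses synthetic data and a NumPy least-squares fit (not the Duke group’s actual algorithm): it learns a mapping from neural firing rates to joystick velocity, then applies that mapping to new activity alone, mirroring the moment the joystick is unplugged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: firing rates of 10 neurons over 200 time
# steps, paired with the 2-D joystick velocity recorded at each step.
true_W = rng.normal(size=(10, 2))
rates = rng.poisson(5.0, size=(200, 10)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.1, size=(200, 2))

# "Training" is a least-squares fit from firing rates to velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Joystick unplugged: new neural activity alone now predicts the
# intended movement command.
new_rates = rng.poisson(5.0, size=(1, 10)).astype(float)
predicted_velocity = new_rates @ W
print(predicted_velocity.shape)  # (1, 2): an x and y velocity
```

Real decoders face noisy, drifting, nonlinear signals, which is why this simple linear picture was eventually superseded.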
These early robot control experiments at the turn of the 21st century were profound, but ultimately offered only a coarse decoding of “left,” “right,” “up,” and so on. Over the past two decades, researchers have refined brain-machine interfaces so that the resulting movement is precise and human. We can now use neural decoding to type on a full keyboard and to control dexterous models of individual fingers, opening the door to a truly replacement-like hand prosthesis. Parallel advancements in computer science saw neural networks take over as the popular choice for signal decoding, given their flexibility and capacity to model increasingly complex signal patterns.
Breaking into popular culture, you may have seen a video titled “monkey mind pong” released in 2021 by Neuralink.5 This video shows a monkey successfully playing the classic ’70s arcade game Pong entirely with its mind. The video catalyzed a frenzy of new attention on brain-machine interface research, helping Neuralink raise over $600 million in the following three years. And while the technology behind the Pong demo itself has existed for at least two decades, Neuralink and others are beginning to use brain-machine interfaces to positively impact patients’ lives.
Earlier this year, Neuralink announced its first human implantation, in a trial participant referred to as “Patient 0.”6 That participant, Noland Arbaugh, is a 30-year-old quadriplegic man. Before the procedure, he interacted with a computer using an interface constructed of sticks and tubes that he manipulated entirely with his mouth. Once fitted with the Neuralink device, a support team spent months tuning and adapting the brain-machine interface to Noland’s specific brain communication patterns. Now fully acclimated, the device enables Noland to operate a computer mouse entirely with his mind. In a video shared with the Guardian earlier this year, he describes the technology as “controlling the mouse with ‘The Force,’” in that the cursor seemingly follows wherever on the screen he looks or thinks to move it.7 Noland has been able to rekindle his love for playing online chess and other video games. Delivering a talk in March of this year, Noland stated that moving to the Neuralink device “has completely changed how I live, I’m waking up…excited for the next day, and that’s something I thought would never happen to me–ever again.”8
Fundamentally changing someone’s life for the better is an inspiring debut for any new technology. However, even as brain-machine interfaces positively impact more people, it’s important to poke and prod at all the possible ways the next generation of this technology could turn sour. Current research and practical applications mostly center on decoding neural signals in the motor cortex, which humans voluntarily trigger by thinking about or actually performing motor tasks. At the end of the day, there is still an underlying element of consent. However, there is growing interest in the research community in expanding brain-machine interfaces to other areas of the brain and to applications beyond rehabilitation from severe injury. Some of these ideas are benign, like more advanced virtual reality video game interfaces, while others are more perverse, such as drowsiness detection as an evaluation of human worker performance or “brain signaturing” for real-time lie detection.9 Moreover, since most brain-machine interfaces share much of the same underlying technology, funding, and public support, government oversight must balance the great therapeutic potential against the risk of the same tools being co-opted into something truly harmful.
This warning call doesn’t have to bring brain-machine interface research to a screeching halt. Instead, moments of groundbreaking progress such as the story of Noland Arbaugh can serve as a reminder of where our focus should be. There is still time to direct the advancement of brain-machine interfaces toward goals that serve the public well and are grounded in principles of consent. The recent controversies over large language models, such as ChatGPT, illustrate what can happen when society is forced to react to a new technology rather than direct it. The brain-machine interfaces of today are the least sophisticated they will ever be.
Without a collective vision for their future, we’ll be caught off guard once again, longing for the days when it was just a game of Pong.

Nick Jänne is a PhD student in Robotics, researching how robots can expand their capabilities in the real world by learning from humans. He also hopes to one day build human habitats on the Moon and Mars using a team of robots and humans. Nick received his Bachelor’s degree in Computer Engineering from the University of Michigan in 2023, and has a passion for reading and writing on the next generation of artificial intelligence.
