Lately, brain-computer interface (BCI) research has picked up real speed, opening up exciting new possibilities for treating hard-to-manage conditions and supporting people with chronic, debilitating diseases.

Brain-computer interfaces (BCIs) are fascinating because they let us communicate with machines using nothing but our brain signals. Imagine moving objects on a screen just by thinking about moving your hand, or picking out letters to form words with your mind – that’s the amazing world of BCIs for you.

The idea of BCIs isn’t exactly new. The term was coined back in 1973 by Jacques J. Vidal, a researcher at the University of California, Los Angeles, who in his paper “Toward direct brain-computer communication” first described BCIs as “utilizing the brain signals in a man-computer dialogue” and “as a means of control over external processes such as computers or prosthetic devices”.

But the real beginnings of BCIs go back to the 1920s, when Hans Berger, a German psychiatrist, discovered that the brain produces measurable electrical activity, kicking off the whole brainwave-recording scene with the electroencephalogram (EEG).

This discovery wasn’t just a big deal for medicine; it also opened the door to using our brain’s activity as a new way to communicate, leading to the development of cool tools like neurofeedback and the birth of the first BCIs. Since then, BCI research has come a long way, evolving from basic experiments to complex systems that help people with serious mobility issues, boost stroke recovery, and even let people control robotic limbs with their thoughts.


BCI research is making waves in healthcare, offering fresh hope for getting back movement and independence for folks dealing with paralysis or brain conditions, and it’s even looking at tackling Alzheimer’s and Parkinson’s down the line.
Non-invasive BCIs are smoothing the way for a tighter bond between brain activity and digital interactions, like turning what we visualize into pictures on a screen.
Despite all the excitement, there are still big hurdles like making sure BCIs are reliable, safe, and easy for everyone to use, especially the types that need you to go under the knife.

Brain Computer Interface: a journey of innovation

In the past few years, it’s fair to say, brain-computer interface research has really branched out, introducing new ways of thinking, enhancing implantable tech, and finding uses for BCIs in everything from medical rehab to video games, and even in creating art.

This has led to a broader view of what BCIs are all about, now seen not just as a way to send commands to apps on purpose, but also as a method to keep an eye on how we’re feeling mentally, aiming to tweak how we interact with tech based on our mental state.

The adventure into brain-computer interfaces is turning what used to be sci-fi into something we can actually get our hands on.

That journey runs from the early experiments of the ’60s that played around with brain signals to control simple gadgets, through today’s breakthroughs in helping paralyzed people move again and developing treatments for brain diseases, all the way to converting daydreams into digital creations.

As the academic world keeps pushing what’s possible, exploring brain signals with new tools like machine learning or recreating experiences with advanced tech, the potential seems limitless. But for now, we’re zooming in on the medical side of things, saving other exciting developments for another time.

Heading into the world of non-invasive Brain Computer Interface

Brain-computer interfaces (BCIs) fall into three big buckets: the non-invasive, the invasive, and the in-between semi-invasive.

The non-invasive gang gathers brain activity info without any need for surgery, using tech like EEG (electroencephalography), MEG (magnetoencephalography), fMRI (functional magnetic resonance imaging), and fNIRS (functional near-infrared spectroscopy). Among these, EEG is the crowd favourite because it can pick up the brain’s electrical signals through electrodes just chilling on your scalp.

On the flip side, invasive BCIs get up close and personal, recording brain activity via electrodes that are surgically placed right next to the neurons of interest, inside the cortex or even deeper in the brain. They rely on hardware like microelectrode arrays (MEAs), sEEG (stereo-electroencephalography) electrodes, and DBS (deep brain stimulation) electrodes.

Then there’s the semi-invasive crew, like ECoG (electrocorticography), which places electrodes on the brain’s surface without penetrating the cortex. Even though ECoG and EEG record similar signals, ECoG delivers better spatial resolution and signal fidelity, and it’s not as easily corrupted by noise.

It’s less risky than going full invasive and gives you bigger signal amplitudes than the non-invasive approach. But, you still need to crack open the skull to get those electrodes in, so it’s only done when there’s a medical need for surgery, making it a bit of a hassle for everyday use despite its super signal-snatching abilities.

Compared to the non-invasive options, invasive BCIs have some clear perks: much finer spatial and temporal resolution, letting them pick up activity from individual neurons or small groups of them; a better signal-to-noise ratio with less trouble from electrical noise or movement artifacts; and the ability to place electrodes exactly where you need them, crucial for decoding specific bits of info and modulating particular brain functions.

But invasive BCIs also have their downsides: they require major surgery with all its potential complications, dealing with hardware issues or updates once the system is in place is a headache, and complex surgery plus aftercare comes with a hefty price tag.

Non-invasive Brain Computer Interface: striding towards wearables

When it comes to non-invasive BCIs, a paper from the European Space Agency a while back mainly dug into “enhanced communication”, where raw speed matters less than it does when you’re controlling robots or neuroprostheses in real time – that is, driving prosthetics and braces directly from brain signals.

One area they’re exploring is tweaking the interface on-the-fly to keep it perfectly tuned to its user, since how your brain does its thing changes with experience and over time. Using real-time learning can help adjust the classifier while you’re using it, making it better at tasks like driving a wheelchair without bumping into stuff right from the get-go.
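To make the idea of on-the-fly adaptation concrete, here’s a minimal sketch (synthetic data, not from the ESA work) of online classifier updating with scikit-learn’s SGDClassifier, whose partial_fit method lets the model keep learning from small batches of labelled trials as the user’s brain signals drift:

```python
# Sketch: adapting a BCI classifier online as the signal drifts.
# All data here is simulated; the feature dimensions and drift model
# are illustrative assumptions, not real EEG.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_trials(n, drift=0.0):
    """Simulate 8-dim EEG feature vectors for two imagined-movement
    classes; `drift` shifts the features, mimicking non-stationarity."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 8)) + np.outer(2 * y - 1, np.ones(8)) + drift
    return X, y

clf = SGDClassifier(random_state=0)
X0, y0 = make_trials(200)
clf.partial_fit(X0, y0, classes=[0, 1])   # initial calibration session

# During use: keep adapting on small batches as the statistics shift.
for _ in range(20):
    Xb, yb = make_trials(20, drift=0.3)
    clf.partial_fit(Xb, yb)

Xt, yt = make_trials(200, drift=0.3)
acc = clf.score(Xt, yt)
print(f"accuracy after online adaptation: {acc:.2f}")
```

The design point is simply that the decoder is never frozen after calibration: each new batch nudges the decision boundary, which is what lets a wheelchair controller stay tuned to its user from the get-go.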

Another focus is looking into the brain’s response to certain cognitive and emotional states, like mistakes, alerts, and focus, which could lead to more meaningful interactions. Plus, they’re checking out non-invasive ways to get a clearer picture of the brain’s electrical activity through estimated local field potentials (eLFP), aiming for better motor task sorting and a deeper understanding of the brain activity steering the BCIs.

Lastly, they’re eyeing the impact of getting feedback through multiple senses in brain-controlled tasks to speed up user training and nail down precise robot control, highlighting how touchy-feely feedback might boost learning and skill handling.

Of all the techniques, EEG is the go-to, thanks to being portable, affordable, and easy to use. It picks up the brain’s electrical vibes through electrodes on the scalp. Despite its easy-breezy approach, EEG has its challenges, like signal quality dropping due to the skull and skin messing with the brain’s signals. Plus, it can get thrown off by external noise, including muscle twitches and eye blinks.
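A standard first defence against the noise sources mentioned above is band-pass filtering. This sketch (with assumed, typical parameters) uses SciPy to keep the 8–30 Hz band often used for motor imagery, suppressing slow electrode drift and high-frequency muscle/line noise in a synthetic trace:

```python
# Sketch: band-pass filtering a raw "EEG" trace to 8-30 Hz.
# The signal is synthetic; 250 Hz sampling and the band edges are
# common choices, not values taken from the article.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)   # two seconds of signal

# Synthetic trace: a 12 Hz mu rhythm plus slow drift and 60 Hz noise.
eeg = (np.sin(2 * np.pi * 12 * t)
       + 2.0 * np.sin(2 * np.pi * 0.3 * t)   # slow skin/electrode drift
       + 0.5 * np.sin(2 * np.pi * 60 * t))   # line/muscle-band noise

# 4th-order Butterworth band-pass, applied zero-phase to keep timing.
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
clean = filtfilt(b, a, eeg)

# The 12 Hz rhythm survives; drift and 60 Hz are strongly attenuated.
print(round(float(np.std(eeg)), 2), round(float(np.std(clean)), 2))
```

Real pipelines go further (artifact rejection, spatial filtering), but this is the basic move behind "fine-tuning signal processing" that the paragraph above refers to.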

Non-invasive BCIs powered by EEG are making waves across fields, from fixing neurological issues to controlling artificial limbs, diagnosing diseases, and cognitive studies. Yet, making these systems comfy and effective for the long haul means tackling some tech hurdles, like boosting signal quality with better electrodes and fine-tuning signal processing algorithms to quickly and accurately figure out what the user wants.

To make non-invasive BCIs more of a daily wearable, research is all about shrinking the gear and blending it with smart wearable tech. Think wireless EEG headbands and other gear like helmets that you can pop on and off easily but still trust to catch your brain’s signals. These advancements are all about weaving non-invasive BCIs into our daily lives, offering big wins in medicine, boosting human abilities, and smoothing out human-machine chats.

Invasive BCIs and their role in getting moving again

Rehab for getting movement back is super important for folks who’ve had a tough time with serious injuries or strokes, aiming to get back what was lost or helping them get used to new challenges.

Strokes, for example, can be devastating because they cut off oxygen to the brain, leading to problems like losing the ability to talk, memory issues, or even paralysis on one side of the body. Studies have found that a brain hit by a stroke can reorganize itself, and the skills lost can be regained, thanks to something called neuroplasticity.

This is where tech like mobile robots and prostheses controlled by brain-computer interfaces (BCIs) comes into play. These systems are new on the scene, helping people with their everyday tasks and getting them back to normal life.

Some rehab methods use brain signals from healthy people to tweak how stroke patients think and act, using virtual and augmented reality to watch and control avatar movements or simulations that mix up injured and healthy limbs. This way, they promote healing through neurofeedback and figuring out how people imagine moving.

Invasive brain recording, where electrodes are placed beneath the skull to monitor what’s happening in the brain, can go deep into the motor cortex or sit on the brain’s surface with electrocorticography (ECoG). These methods stand out because they deliver really clear and detailed signals, making the info they capture super clean.

But, they’re not without their problems, like the risk of surgery, being limited to checking out only small bits of the brain, not being able to move the implants around to check out different parts, and the body possibly not getting along with the implants. These issues mean that invasive stuff is mainly used for medical BCI things for just a few people with disabilities.

In BCI research, testing invasive stuff has mostly been done on animals, like monkeys and mice, to see how well they can control movement with electrodes in their brains. Monkeys have managed to move cursors to spots on an imaginary cube, helping figure out how they plan to move and teaching algorithms to predict movements better.
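The "algorithms that predict movements" in these cursor experiments are often, at their simplest, linear decoders from neural firing rates to intended velocity. Here’s a toy sketch with ridge regression on synthetic data (the linear "tuning" model and all numbers are illustrative assumptions, not the cited experiments):

```python
# Toy sketch: a linear decoder mapping neural firing rates to 2-D
# cursor velocity, the basic idea behind movement-prediction decoders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_neurons, n_samples = 40, 500

# Assume each neuron's rate is a noisy linear function of intended
# velocity (a simplified cosine-tuning-style model).
true_tuning = rng.normal(size=(n_neurons, 2))
velocity = rng.normal(size=(n_samples, 2))            # targets: vx, vy
rates = velocity @ true_tuning.T \
        + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Train on the first 400 samples, evaluate on the held-out 100.
decoder = Ridge(alpha=1.0).fit(rates[:400], velocity[:400])
r2 = decoder.score(rates[400:], velocity[400:])
print(f"held-out R^2 of velocity decoding: {r2:.2f}")
```

Real systems use richer models (Kalman filters, recurrent networks), but the fit-a-decoder-then-predict-intent loop is the same one the monkey cursor studies helped establish.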

Human studies have mostly been with people who have big challenges, like those with ALS, who’ve been able to move cursors on screens by choosing things after getting an electrode put in their motor cortex.

A neat study a while back tried combining a brain-to-brain interface (BBI) with a muscle-to-muscle interface (MMI) in a closed loop. This setup connects artificial pathways (like data streams) with natural ones (nerves). A sender’s intention is picked up by an EEG-based brain-computer interface, triggering transcranial magnetic stimulation (TMS) on a receiver, making their hand move.

At the same time, the TMS tweaks the motor evoked potentials (MEPs) – these are the electrical signals you can pick up in muscles after zapping the brain’s motor neurons or spinal cord – recorded from the receiver’s arm, which then kicks off functional electrical stimulation (FES) on the sender’s arm, leading to hand movement.

They tried this out with human-run loops and automatic ones with 6 pairs of healthy volunteers to see how well it worked. Turns out, the accuracy in the human-run tests was 85%, showing this idea might just work. In the auto tests, two people managed to control hand movements back and forth up to 85 times without stopping.

Current research scoop

We’ve saved a little chat about the latest research buzz for last, mentioning trailblazers like Bitbrain, NextMind, and Neuralink. In some cases, we’re a bit in the dark about the finer details of their experimentation paths. NextMind, recently snapped up by Snap Inc., is working on a gadget that turns visual cortex signals into digital commands, with the goal of recreating on a screen any image the user imagines.

Then there’s Neuralink, kicked off by Elon Musk, which is on a mission to develop implantable brain-machine interface (BMI) devices, like the N1 chip that can directly connect with over 1,000 different brain cells. The goal here is to help people with paralysis regain mobility through the use of machines and prosthetic limbs. They’re also diving into how their tech could help treat conditions like Alzheimer’s and Parkinson’s.

But, even with all the media buzz around this last study, we’re still waiting for the official thumbs-up that experimentation has really started, not to mention the trial’s registration on recognized platforms like ClinicalTrials.gov, a step seen as crucial for ethics and transparency in clinical research.

Glimpses of Futures

Let’s now try to peek into future scenarios, using the STEPS matrix to analyze the impacts that the evolution of brain-computer interfaces could have from social, technological, economic, political, and sustainability viewpoints.

S – SOCIAL: Modern tech, like brain-computer interfaces (BCIs), brain imaging, and transcranial magnetic stimulation, has opened up new insights into how our brains tick, offering amazing support opportunities for people with disabilities but also ringing alarm bells about potential pitfalls. Digging into the depths of the brain brings not just huge support opportunities but also serious risks of misuse. It’s vital that the development of these technologies is driven by a commitment to ethical use, balancing potential benefits with the need to protect individual integrity and privacy.

T – TECHNOLOGICAL: Beyond the need to solidify ongoing medical studies, the developments of BCIs in other areas, from gaming to entertainment, automation to education, neuromarketing to neuroergonomics, and even new frontiers in space exploration, are pretty exciting.

E – ECONOMIC: The multidisciplinary nature of BCIs is proving its potential positive impact across various sectors, including industry. This diversity hints at a significant economic impact with potential transformations in numerous societal and industrial areas.

P – POLITICAL: The ethical concerns we mentioned about research development, trial safety, and data privacy are pushing various national legislative bodies to issue strict guidelines to prevent abuse. Following Europe and the USA, China has recently, at the start of 2024, adopted an ethical guideline for BCI research, stating it should primarily be used for therapeutic purposes.

S – SUSTAINABILITY: The need for rigorous scientific and clinical validation, along with ensuring research transparency and ethics, highlights the importance of proceeding cautiously, making sure the benefits of these technologies are safely and fairly accessible to all patients. The journey towards full integration of BCIs into daily clinical practice is still long, but current advances offer a promising glimpse into the future of medicine and neurotechnology.

Written by:

Maria Teresa Della Mura

Journalist