Written by Sam Brinson
Edited by Maisha Razzaque
Brain to Brain Communication
Envisioning a direct link between human minds
Imagine driving along a deserted road at night when a large deer bounds into your path, forcing you to swerve hard to avoid a collision. You skid off the road, the car hits some shrubs and flips upside down. You're hanging in your seatbelt, tangled to the point that you can't free yourself. You need to call for help, but you can't find or reach your phone in the wreckage.
But then you remember: there's a device in your brain that can transmit your thoughts to someone who can help. With a specific mental operation, a unique thought, you switch the device on and connect with an emergency operator. You give them all the details without uttering a sound, and soon someone is on their way to get you.
If a brain-to-brain interface is ever realized, scenarios like this will no longer be science fiction. Channels for communication have proliferated with each advance in technology, and a direct link between brains might be the next big development.
Where we are now
Grau et al. (2014) demonstrated that an email containing either 'hola' or 'ciao' could be sent and received using brain signals. However, the words were encoded as "streams of pseudo-random bits representing the words," rather than the participants thinking of the letters or words directly. The signal appeared as flashes of light in the visual field of the receiver, not as any experience of the words themselves.
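As a loose illustration of that encoding idea, here is a toy Python sketch of how a short word could be turned into a pseudo-random-looking bit stream and recovered on the other end. This is not the cipher Grau et al. actually used; the 8-bits-per-letter mapping and the XOR keystream are assumptions chosen purely for clarity.

```python
import random

def word_to_bits(word):
    # Illustrative encoding: 8 bits per character from its character code.
    # (The study's actual encoding scheme differed.)
    return [int(b) for ch in word for b in format(ord(ch), '08b')]

def scramble(bits, seed):
    # XOR each bit with a seeded pseudo-random keystream, so the
    # transmitted stream looks random -- loosely echoing the paper's
    # "streams of pseudo-random bits representing the words".
    rng = random.Random(seed)
    return [b ^ rng.randint(0, 1) for b in bits]

def descramble(bits, seed):
    # XOR with the same keystream is its own inverse.
    return scramble(bits, seed)

def bits_to_word(bits):
    # Regroup the bits into 8-bit chunks and map each back to a character.
    chars = [chr(int(''.join(map(str, bits[i:i + 8])), 2))
             for i in range(0, len(bits), 8)]
    return ''.join(chars)
```

The point of the sketch is that what crosses the link is not the word itself but an agreed-upon bit pattern, which is why the receiver in the experiment perceived flashes of light rather than language.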
In the future, we'll want to be able to transfer a full conversation, and in real time. We speak at roughly 100-150 words per minute, and Brysbaert et al. (2016) estimate that the average American knows upwards of 42,000 words. Incredible as the current technology is, we still have a way to go before we can transfer complex ideas and have meaningful discussions.
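A rough back-of-envelope calculation using the figures above hints at the bandwidth such a link would need just to keep pace with ordinary speech. The word rate and vocabulary size come from the sources cited; everything else is a simplification that ignores grammar, prosody, and redundancy.

```python
import math

words_per_minute = 130     # mid-range of the 100-150 wpm speaking rate
vocabulary_size = 42_000   # Brysbaert et al. (2016) estimate

# Minimum bits needed to pick out one word from that vocabulary
bits_per_word = math.ceil(math.log2(vocabulary_size))

# Information rate required just to keep up with speech
bits_per_minute = words_per_minute * bits_per_word

print(bits_per_word, bits_per_minute)  # -> 16 2080
```

Roughly two thousand bits per minute is a modest figure by computing standards, but it is orders of magnitude beyond what the word-at-a-time demonstrations above achieved.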
The current push to connect the brain to machines, whether it's to control robotic limbs or to move a cursor around a screen, suggests that the technology itself won't be an issue for long. Several different ideas are being developed in order to read brain signals at high resolution while aiming to be minimally invasive. Time will tell how good this can become, but we're on the right trajectory.
Perhaps a larger obstacle is figuring out what people are saying, particularly when they're saying something new. Linking that large vocabulary to the neuronal code for each word is cumbersome enough, but how would we figure out what a novel neural code means? And on the receiving end, how would we know which neurons to stimulate for a person to experience a word they aren't familiar with?
One way around the problem could be to go beyond the neural codes for whole words and instead figure out how certain qualities of words are produced, such as syllables or the sounds themselves. Cochlear implants already take external sounds and deliver them to the brain, allowing many deaf people to hear. Meanwhile, Anumanchipalli et al. (2019) successfully created synthetic speech from brain signals. Combining these two techniques might offer a route to brain-to-brain communication.
If/when we achieve this lofty goal, a number of interesting issues could make their way into the equation—will we stop talking to each other and instead rely on this silent form of communication? Will we feel obligated to be constantly available? Will we be able to transfer more than language, such as skills and knowledge? Will we talk to our devices, apps, and appliances? Will we be at risk of being hacked or inundated with spam?
Language and communication have been indispensable to cooperation and innovation. We have the world we do today because of this ability to express, understand, and question each other. But as time goes by, the way we do this might change dramatically.
- Grau, C., Ginhoux, R., Riera, A., Nguyen, T. L., Chauvat, H., Berg, M., et al. (2014). Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS ONE, 9(8), e105225. https://doi.org/10.1371/journal.pone.0105225
- Brysbaert, M., Stevens, M., Mandera, P., & Keuleers, E. (2016). How many words do we know? Practical estimates of vocabulary size dependent on word definition, the degree of language input and the participant's age. Frontiers in Psychology, 7, 1116. https://doi.org/10.3389/fpsyg.2016.01116
- Anumanchipalli, G., Chartier, J., & Chang, E. (2019). Speech synthesis from neural decoding of spoken sentences. Nature, 568(7753), 493-498.