May Professor Charles Xavier, Marvel's telepathic mutant and one of the creations of the late Stan Lee and Jack Kirby, forgive me, but the best fictional portrait of direct communication between human minds that I have read to date is the work of a Brazilian author.
If you haven't read it yet, I strongly recommend getting to know "The Telepathy Are Others", by Ana Rüsche. If only to dispel the notion that, if we did manage to connect telepathically with another person, the result would be nothing more than a silent version of the conversations we are already capable of having out loud.
Nothing could be further from what happens in the book. I'll avoid spoilers, although I can say that the plot involves the protagonist's trip to Chile, psychoactive substances derived from the knowledge of traditional peoples, and the interface between biology and technology. For now, suffice it to say that the most beautiful thing about the telepathy portrayed by Rüsche's imagination, at least from my point of view, is that the author takes seriously one of the maxims of neuroscience: "The mind is what the brain does."
In other words, each of our minds is much closer to a verb than to a noun. Instead of a unitary, constant essence, a kind of tiny pilot sitting in a cockpit somewhere in the skull, pushing buttons and turning the wheel, what we find is a whirlwind of memories, sensations, emotions and desires that accompany and/or overlap with the "I" doing the talking.
That's why, when different minds touch in the story, the experience is infinitely more overwhelming than the simple "Hi, can you hear me?" we pronounce while holding a phone to our ear. It is more like two hurricanes, or two tectonic plates, colliding.
Does this mean that, in principle, it would be impossible to “read someone’s mind” by technological means? Well, two recent studies show the possibilities and limitations behind this idea.
Both rely on the same basic tools: pattern-recognition systems based on artificial intelligence (the same ones behind the infamous ChatGPT and "art" generation applications) and analysis of the volunteers' brains by fMRI (functional magnetic resonance imaging), which tracks cerebral blood flow.
The first step is to present the volunteers with known stimuli and track how their brains react. In one of the studies, carried out in Japan, the stimuli were basically photographs; in the other, from a team in Texas, it was narrative verbal language (16 hours of podcasts; be patient). Knowing how the brain responds to these stimuli, the artificial intelligence should then be able to reconstruct unfamiliar images and stories from brain signals alone, thereby "reading" the mind.
Did it work? Yes, to some extent. The reconstructed images closely resemble those the people actually saw, while the reconstructed stories share the same general theme and structure (even though many details differ).
Most intriguing of all, however, is that the artificial intelligence's "training" only works when it is individualized: there is no point in using data from one person's brain to reconstruct the "thoughts" of another. Each brain, it seems, is such an idiosyncratic and chaotic machine that its detailed workings cannot be predicted from what one sees in other brains. Better luck next time, Professor Xavier.