Myers-Briggs. Astrology. BuzzFeed quizzes that tell you what kind of bread you are, according to your favorite Twilight quote. Based on their abundance alone, it’s safe to say that people — whether they’re seeking to self-categorize as a means of self-discovery or simply as a way to quell some existential dread — really, really love a personality test.
And sure, feeling aligned with a personality test is usually pretty harmless. But according to a new study published in the journal Consciousness and Cognition, our willingness to believe existential personality label-makers may have serious implications for the ongoing rise of both brain-computer interfaces and AI.
To test how effectively they could trick individuals into identifying with a BS personality description, a team of Canadian and Swedish researchers used social media — where astrology, pop psychology, and other personality-ascribing systems are prevalent — to recruit 62 participants, who were under the impression that they were testing a new type of neurotech that, according to the study, “could accurately read their thoughts and attitudes.”
“To begin our hoax scenario, we intended to build participants’ trust in the machine by pretending that it could decode their preferences and attitudes,” the study authors wrote. “The system included a sham MRI scanner and an EEG system that supposedly used neural decoding driven by artificial intelligence (AI).”
Per the research, the fake system was relatively elaborate, with the procedure featuring “three main phases: basic brain reading, human error detection, and attitude feedback.”
During each phony “phase,” participants were asked to evaluate how much they agreed with various personality and opinion-related prompts. The fake system, meanwhile, purported to “accurately decode” answers that they held “unconsciously in their brain” and that, per the ruse, “can be decoded from neural activity.”
In other words, participants were made to believe that the machine, using advanced neuroscience, could tell them what they thought, not just how they thought. And according to the study, participants ate the results right up, convinced that the machine knew them better than they knew themselves.
“As the machine seemingly inferred participants’ preferences and attitudes, many expressed amazement by laughing or calling it ‘cool’ or ‘interesting.’ For example, one asked, ‘Whoa, the machine inferred this? … Oh my god how do you do that? Can we do more of these?’
“None of the participants,” they continued, “voiced any suspicion about the mind-reading abilities of the machine throughout the study.”
While the researchers did note a few caveats to their work, the fact that no one voiced any suspicion about the machine’s efficacy — how it worked, or whether it was really capable of reading minds — is indeed spooky. It certainly says something about the willingness of humans to be categorized, particularly by a force outside of themselves. But perhaps to an even greater degree, it says a lot about how much faith people put in algorithms and machines, especially considering that a great deal of the “preferences” and “attitudes” the researchers’ machine presented were prickly political outlooks.
“In sum, we believe that our admittedly uncommon and elaborate paradigm may help produce realistic reactions to future neurotechnologies,” the authors concluded, warning of what such a machine might be capable of in the future. “This paradigm offers promise in emulating these neurotechnologies to better understand and prepare for their eventual consequences.”
READ MORE: Emulating future neurotechnology using magic [Consciousness and Cognition]