You see a lot of strange conversations carried out online. Thanks to the anonymity afforded by the screen, people are brave, typing words they’d never say to anyone’s face in real life. Sometimes this is a good thing, as people find the love of their life; sometimes it’s bad, as rates of online bullying skyrocket. The latest mark against online communication is the fallout left in the wake of the Facebook Artificial Intelligence Research (FAIR) team’s controlled experiment.
This past July, FAIR conducted an experiment that followed the digital discussions held between two AIs, nicknamed Bob and Alice, and the results rocked the world. Not because Bob asked Alice a/s/l or because Alice trolled Bob. Their dialogue didn’t involve anything so familiar. In fact, it was downright confusing. Their conversation went a little like this:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Confused yet? Most people are. They’re also scared. Many people think it’s the first step toward the Singularity, a hypothetical moment when AI develops sentience and takes over the world, a future event so heinous it deserves capitalization.
A sentient AI isn’t anything like Alexa, Siri, or Google Assistant judging your Internet search history or lack of calendar dates. Nor is it a bot determining which ads you see as you browse Facebook, a phenomenon that relies on your previous browsing habits to decide that an iPhone wrap from dbrand is more enticing to you, an iPhone 7 user, than an ad for handmade vegan ice cream in California. At least you would have found the iPhone 7 wraps at dbrand.com on your own; there’s no excuse for those random ads that seem to misjudge both your interests and your location.
Singularity marks a time when computers are so far advanced that the rest of humanity is irreversibly changed. It’s machine learning on steroids, signalling a time when computers can truly learn from all data and adapt their behaviour accordingly. These AIs would be capable of making complex decisions without relying on rudimentary, man-made ‘if x, then y’ type coding. In basic terms, they’d have a mind of their own and they’d turn it on us.
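To make that contrast concrete, here is a minimal, purely hypothetical sketch in Python. The first assistant is the hand-written ‘if x, then y’ style of programming described above; the toy learner after it has no fixed rules at all and simply adjusts itself to whatever example conversations it is fed. Neither resembles a real AI system; the point is only the difference in kind.

```python
# Hypothetical illustration: hand-coded rules versus behaviour learned from data.
from collections import Counter, defaultdict

def rule_based_assistant(message: str) -> str:
    # Every behaviour here was decided in advance by a programmer ("if x, then y").
    if "weather" in message.lower():
        return "Here is today's forecast."
    if "alarm" in message.lower():
        return "Alarm set."
    return "Sorry, I don't understand."

def train(examples):
    # examples: a list of (message, reply) pairs the system learns from.
    counts = defaultdict(Counter)
    for message, reply in examples:
        for word in message.lower().split():
            counts[word][reply] += 1
    return counts

def learned_assistant(counts, message: str) -> str:
    # No fixed rules: the reply is whichever one the training data makes most likely.
    votes = Counter()
    for word in message.lower().split():
        votes.update(counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "Sorry, I don't understand."
```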
This is why some people have labelled Facebook’s experiment the first step toward a robotic revolution. Fans of I, Robot know this won’t bode well for the human race. Between this and Trump’s most recent threats to North Korea, it’s a wonder any of us get out of bed each day.
If this sounds alarmist, it’s because it is. While this article can’t say anything certain about where Trump’s words on nuclear war will lead, it can provide some relief with regard to our AI overlords.
The nonsensical discussion held between Bob and Alice is actually an excellent opportunity for computer scientists to study an AI’s inner processes. Dhruv Batra, one of the Facebook researchers involved in the experiments, says agents inventing their own languages “is a well-established sub-field of AI, with publications dating back decades.”
Batra is referring to Natural Language Processing, or NLP for short. This field of computer science studies the interactions between computers and human (also called natural) languages. The central challenge the field faces is getting a computer to interpret human language correctly and respond to it appropriately.
In FAIR’s case, as in most others, this relies on a machine’s ability to understand the meaning and sentiment behind English sentences. Success here has applications in automated chatbots that detect a consumer’s needs and connect them to the appropriate service. Think, for example, of a chatbot that helps you book a cab.
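As an illustration, here is a minimal sketch of the intent-detection step such a chatbot needs, written in Python with the scikit-learn library. The training phrases and intent labels below are invented for the example; a real service would learn from far more data and far messier wording.

```python
# Toy intent detection: map a customer's message to the service they want.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I need a cab to the airport",
    "book me a taxi for 8 am",
    "what's the weather like tomorrow",
    "is it going to rain today",
    "order a large pepperoni pizza",
    "I'd like to get some food delivered",
]
intents = [
    "book_cab", "book_cab",
    "get_weather", "get_weather",
    "order_food", "order_food",
]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_phrases, intents)

print(model.predict(["can you get me a taxi downtown"]))  # most likely ['book_cab']
```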
But perhaps we should think of it another way. Computers compute at speeds incomprehensible to the human mind. While we most often associate that ability with math equations, it applies just as well to language.
Back in May, the FAIR team published a convolutional approach to machine translation that it reported to be nine times faster than recurrent neural networks. That springtime work fed directly into the results of Facebook’s summer experiment.
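The speed difference comes down to parallelism: a convolution can look at every word of a sentence at once, while a recurrent network has to walk through the sentence one word at a time. The sketch below, a toy built with the PyTorch library rather than FAIR’s actual model, shows the two kinds of encoder side by side.

```python
# Toy comparison of a convolutional and a recurrent sentence encoder.
import torch
import torch.nn as nn

vocab_size, embed_dim, sentence_len = 1000, 64, 12
tokens = torch.randint(0, vocab_size, (1, sentence_len))   # one fake 12-word sentence
embed = nn.Embedding(vocab_size, embed_dim)

# Convolutional encoder: one pass that covers all 12 positions at once.
conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1)
conv_out = conv(embed(tokens).transpose(1, 2))              # shape (1, 64, 12)

# Recurrent encoder: internally steps through the 12 positions one after another.
rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
rnn_out, _ = rnn(embed(tokens))                             # shape (1, 12, 64)
```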
The bots’ initial vocabulary was English, and over the course of the experiment Bob and Alice tweaked that language to drop the features they deemed inefficient or unnecessary. It’s a phenomenon that occurs naturally within any specialized group of people; since Bob and Alice compute language nine times faster than we do, it simply happened on a speedier timeline.
With time, scientists can translate the bots’ words back into comprehensible language to see what they meant. Despite this opportunity, the media applied its own logic to the situation, reporting that Facebook “shut down” the chatbots because its scientists were alarmed by the results. In reality, no one pulled the plug on the experiment; its parameters were simply changed to explore different reward functions for the AIs.
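To see what translating the bots’ words back might look like, here is a deliberately toy decoder in Python. It assumes, and this is an illustrative assumption rather than FAIR’s published analysis, that the bots’ repetition came to stand in for quantity, so a phrase repeated seven times means seven of the item being negotiated over.

```python
# Hypothetical decoder for the Bob-and-Alice shorthand quoted above.
# Assumption (for illustration only): repeating "to me" encodes how many items are wanted.
import re

def decode(utterance: str) -> str:
    items = re.findall(r"\b(ball|hat|book)s?\b", utterance)
    item = items[0] if items else "item"
    quantity = len(re.findall(r"to me", utterance))
    return f"I want {quantity} {item}(s)"

print(decode("balls have a ball to me to me to me to me to me to me to me"))
# -> "I want 7 ball(s)"
```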
Slow moving though we may be, we’re still in control when it comes to experiments like the one carried out by FAIR, at this moment at least. A future in which Bob and Alice are our overlords isn’t likely. The Singularity is a long way off, if it ever happens at all. Until then, we’re safe.