Recently, researchers at the Allen Institute for Artificial Intelligence in Seattle examined whether machines could correctly complete simple sentences when given several choices for how a particular sentence might end. When the lab first put its A.I. systems to the test, the researchers found that the machines completed the test sentences correctly about 60 percent of the time, while human subjects succeeded 88 percent of the time. Even so, to the experts who had built the machines, 60 percent seemed an impressive number. Two months later, however, Google researchers unveiled a system called BERT that completed the sentences at the same rate of success shown by humans.
In fact, A.I. researchers have recently shown that computer systems can indeed learn the vagaries of human language and then apply them to a variety of specific tasks. Building on this somewhat surprising development, several independent research organizations have become increasingly convinced that they can improve the technology to the point where digital assistants such as Alexa or Google Home will learn the syntax of language well enough to analyze important documents inside law firms, hospitals, banks, and other businesses, perhaps even reaching the point where they can carry on a decent conversation with humans in the process.
Yet for those who think the spoken language now facilitated by these systems could one day replace human language and communication in a number of areas (and indeed this might occur), there remains a distinct danger here with potentially wide-reaching consequences. The danger is simply that the word is never the thing itself: the syntax of written or spoken language can never lead one completely toward the experiential reality to which it alludes. And because these voice-activated A.I. systems have an entirely syntactical rather than experiential or emotive basis, they will continue to skim the reality they express at a level that remains predominantly superficial.
The neuronal networks of our physical brains possess a high degree of what neuroscientists call plasticity: they readily undergo changes in both structure and function in response to our experiences. And because they are endlessly malleable, they are essentially different from the networks inside digital devices that employ various forms of artificial intelligence. Those networks, which are largely a function of algorithms, tend to remain fixed until someone changes them externally. That is, there has so far been very little evidence that A.I. systems can regulate their own patterns of intelligence in the same malleable manner that human beings can.
Therefore, when human neuronal networks come into contact with the digital networks that are part of A.I. systems, they are encountering something more rigid and fixed than the malleable pathways inside our own brains. And because A.I. systems absorb language and speech at an essentially shallower level than humans do, their absorption being syntactic rather than experiential, will human speech and language, through our constant interaction with A.I. systems, likewise grow shallower over time, skimming along the surface of mere syntax? That is, will our own malleable neuronal networks be conditioned by the more rigid digital networks to attend only to the word, and not to the thing the word represents?
As the networks of our thinking minds come increasingly into contact with the digital networks of artificial intelligence, will our thoughts themselves become more rigid and fixed because of how the artificial networks of A.I. machines condition them? Or will our own organic networks of thought remain just as malleable as they have always been? This is a significant question that needs to be asked as we move ever further into a virtual, non-human world that may be conditioning us more than we can imagine.