According to a recent article in Popular Science, Jaan Tallinn, an Estonian-born computer programmer with a background in physics, has lately been conducting a campaign to save us from an artificial intelligence that might eventually reach the point where we can no longer control it. After co-founding Skype in 2003 and helping develop the back end of the app, he cashed in his shares when eBay bought the company two years later and began focusing his attention on just how far A.I. might intrude into our culture and our daily lives. Although A.I.s had previously inserted themselves only into specific tasks such as cleaning the kitchen floor, playing chess, or recognizing human speech, Tallinn believes they might eventually broaden their capacities and manipulate humans in dangerous ways through the data generated by our smartphones; ultra-smart A.I.s might even one day take our place on the evolutionary ladder, dominating us the way we now dominate apes, or potentially exterminating us altogether.
Although much of this might seem like the stuff of science fiction, akin to the way the computer HAL dominated the two astronauts in Stanley Kubrick’s film 2001: A Space Odyssey, Tallinn is entirely serious about his mission to save us from our digital creations by using the world of artificial intelligence itself to combat them. In addition to funding programs devoted to turning A.I. against itself in order to avert threats to humanity’s survival, he has founded organizations such as the Centre for the Study of Existential Risk (CSER) at Cambridge, which he began in 2012 with two other scholars who shared his concerns about how humans might one day be manipulated by various forms of artificial intelligence.
According to the article, the ultimate goal of Tallinn and CSER is to produce A.I. safety research that will eventually create machines that, in the words of CSER co-founder and Cambridge philosopher Huw Price, will be “ethically as well as cognitively superhuman.” Other A.I. researchers, however, have raised a question in response: if we don’t want A.I. to dominate us, do we want to dominate it? In other words, does A.I. actually have rights? Tallinn has dismissed this argument as irrelevant because it assumes that intelligence equals consciousness, even going so far as to say that consciousness is beside the point: as he has been quoted as saying, we don’t have the luxury of worrying about consciousness if we haven’t first solved the technical challenges to our safety and continued survival as a species.
Many A.I. researchers are reportedly annoyed by what they consider a misconception: the equating of intelligence with consciousness. And if one looks at consciousness in its more limited definition, simply being aware of one’s environment to the point of being able to interact with it, it is easy to see why any attempt to equate that narrow definition with artificial intelligence might indeed rankle those researchers. Yet if one is able to imagine a more expansive consciousness, one which might in fact be a form of higher intelligence, then one’s perspective on consciousness and intelligence in relation to artificial intelligence begins to take on an entirely different hue; one that might actually have to do with saving us as humans in ways that lie entirely outside the capabilities of even the most complex, advanced machines.
Perhaps the phrase “a more expansive consciousness,” if one chooses to use that term, is something of a misnomer, simply because it implies being conscious from a center, a me, whereas a consciousness that transcends our usual, limited definitions of intelligence, and for that matter any sort of artificial intelligence, has no center. In fact, that might be the very definition of a larger intelligence and a more expansive consciousness: that it doesn’t operate from a center. A.I.s, on the other hand, would seem forever doomed to operate through a center, simply because they have no capacity to conceive of a world that might exist outside their own activities, and no capacity for the intuitive moment of direct insight that arises when the emotive and cognitive worlds become one.
Jaan Tallinn might indeed be right to worry that A.I. machines could one day take control of our daily lives in ways that are hazardous to our continued existence as a species. Yet there might be another threat to our continued evolution that is just as dangerous: that these digital machines begin to dull both our cognitive and emotive capacities in ways that prevent us from searching for a larger consciousness and a more insightful intelligence, diminishing our working memories, our capacity for creative absorption, and our potential for a highly receptive inner life. For if that begins to occur, our ultimate purpose as humans, the search for that something larger, might likewise begin to vanish.