The Denial of the Word in Cyberspace
  Lyn Lesch        October 13, 2019

Just recently, the much-anticipated biography of the great intellectual and writer Susan Sontag by Benjamin Moser appeared on the shelves of bookstores. Although Sontag was known for any number of comments on a variety of subjects, from the inherently limiting nature of the photographic image to disease being used as a metaphor, one of her most famously quoted lines was, “Love words / agonize over sentences / pay attention to the world.” What she most likely meant by this admonition was that by pushing oneself into the meaning of words on the printed page, one is in effect connecting oneself more directly with the particulars of the world in which one lives.

Unfortunately, one of the acutely negative effects wrought by the coming of our current digital age is that people increasingly tend to get their information about the world from predominantly visual sites like YouTube or Data Art, rather than from books, papers, and web pages, where receiving significant information on a particular subject requires diving into a piece of writing at length. As a result, as information and knowledge are increasingly accessed through images rather than through words, the opportunity people have to connect themselves more fully with their world through the written word may over time be significantly diminished.

Ultimately, words can be fragmented pieces of information that prevent us from seeing the whole of some area of human endeavor, simply because as soon as one attaches a word to something, the word tends to limit it by defining it. And if there is in fact a unitary, expansive consciousness, then that surely exists on the other side of words and thoughts. Yet at the same time, words and thoughts have led to certain towering intellectual achievements, such as Charles Darwin’s Origin of Species or Albert Einstein’s General Theory of Relativity. Without being fully attentive to the meaning of written words, human beings could never have explored their world or their universe as fully as they have.

At the same time, one has to wonder just what might occur if people grow too attached to the pictures, images, and videos inside their phones or PCs as primary sources of information. Will our capacity for exploration begin to grow limited simply because we will be less acquainted with how to effectively use words and thoughts as tools for our explorations? And will we be less able to fully sink into the images and pictures that are presented to us in the digital world simply because we no longer have the written word to effectively guide us in doing so?

There is also the matter of the significance of the written word in the sort of post-truth age we now seem to inhabit. Now more than ever, we all need to be very careful about the meaning of words within various social and political contexts, so that we can remain certain that words and the truth remain one and the same. If people instead increasingly get their knowledge and information from pictorial images or videos, there is a significant danger that the truth can be manipulated more easily. Susan Sontag was able to write so brilliantly about a variety of subjects – from the photographic image to the interpretation of art to international relations – primarily because she was so in love with the written word. It would be a shame if this same love of words began to disappear in our current cyber age.

Lyn Lesch’s book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled is being published later this year by Rowman & Littlefield and will be available on Amazon.

The Stream of Life and Political Discourse
  Lyn Lesch        September 30, 2019

As most of us already realize, political discourse in our society is growing increasingly rigid and preconceived. That is, politicians and others have staked out their positions on certain issues before taking time to seriously listen to anyone who may hold a different viewpoint. Of course, this dogmatic approach is in part a function of many people believing that various social and political forces need to be tightly controlled in order to keep things from going awry. That is, they think that just as a powerful river needs to be dammed to keep it from overflowing its banks, societal forces need to be controlled in order to be dealt with correctly.

Whether it’s finding an equitable way to be certain that everyone has access to affordable health insurance, how to somehow put an end to the current wave of mass shootings, or how to deal with the issue of climate change – we tend to think that approaches to these particular issues, and others, need to be tightly regulated in order to be effectively addressed. And unfortunately, people on the left have now become just as guilty of this sort of myopic approach as those on the right.

Whatever the issue in question – health insurance, gun control, immigration, foreign policy, or climate change – political candidates and others now tend to feel that it is somehow necessary to cling to rigid positions on hot-button issues, lest they begin to lose control of their attachment to them. Certainly, the recent Democratic debates, in which candidates appeared too busy defending their narrow positions on particular issues to genuinely listen to each other, would seem to be a prime example of this trend.

Part of the problem may simply be that so many people in our current Twitter-universe culture believe that approaches to various societal problems must first be narrowly defined in order for those problems to be properly addressed. What is missing is the understanding that if we let go of this need for dogmatic control, the forces in our society, left alone to resolve themselves without a certain measure of external control by government or other bureaucracies and organizations, might move effortlessly and naturally toward other larger, more significant forces in reaching a point of equilibrium.

The metaphysical thinker Alan Watts, in his book The Wisdom of Insecurity, made mention of what he referred to as the Great Stream of Life: that universal force we consistently impede by defining various realities through words, thoughts, and concepts, which are never the actual realities they describe. So we’re never really in touch with those same forces, because we have limited their dynamic flow by tightly defining them. If we instead allowed them to flow toward where they’re naturally inclined, they would grow larger in scope, thus permitting us to see more clearly the full spectrum of societal dynamics of which they’re a part.

Perhaps what is needed is a new type of libertarian approach, in which government and other bureaucracies stop damming the stream of life of which all such issues are a part through the imposition of narrow, myopic solutions, and instead allow the events of our social and political discourse to move naturally and effortlessly toward wherever they’re inclined to move. For this to take place, however, would mean fusing political and social concerns with a more expansive view of life in our world; something which might best be described as a flow state, allowing different points of conflict to resolve themselves naturally amidst a larger vision of what might be possible.

To place political and social concerns within narrow definitions that limit them even before they have had the opportunity to effectively assert themselves among us prevents us from envisioning just what type of society we might inhabit. Part of the problem, of course, is that in today’s Internet age, with the appearance of the 24-hour news cycle on cable television, the media is able not only to define for us how political leaders should look and act, but also to narrowly define the context of important issues. And so we remain less open than we otherwise might be to potentially larger visions of these issues.

Ultimately, it makes no sense to pigeonhole the forces with which our society must deal by constantly preventing them from expanding into what they might eventually become, particularly when this is done in the name of dogmatic expediency and political correctness. Marianne Williamson may not be ready to be President, but her appearance at two of the recent Democratic debates may have provided a welcome respite from narrowly defined issues; and in so doing, it gave us all a slight peek over the edge of what might be possible if we were to open our minds just a little.

Lyn Lesch’s book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled is being published later this year by Rowman & Littlefield.


Abstract Thought and Memory in the Digital Age
  Lyn Lesch        September 23, 2019

In April 1929, as recounted in this week’s issue of The New York Times, a journalist from a Moscow newspaper turned up in the office of Dr. Alexander Luria, a neuropsychologist, complaining of an unusual problem: he never forgot anything. When Dr. Luria tested the man, he found that he could easily recall long strings of numbers and words, foreign poems, and scientific formulas. Decades later, the man was still able to effortlessly recall all of these things. Yet there was one significant problem: he had a hard time understanding abstract concepts.

If there is indeed a certain inverse relationship between the ability to retain specific facts and information and our capacity for abstract thought, then this is something that needs to be investigated in our current digital age, one in which we are endlessly inundated with facts and information coming at us on our phones and PCs. In short, is our ability to retain specific information, or to know how to come in contact with it by employing various search engines and websites, hampering our ability to think abstractly, to the point where we may be significantly losing our capacity for abstract thought simply because we are no longer focusing on it to the degree we formerly did?

Yet that may not be the real problem. Our capacity to think abstractly is significantly related to how well our long-term memories can retain the facts and information that we use for abstract thought. And although the storage capacity of our long-term memories is veritably limitless, the storage capacity of our short-term memories has specific limitations, ones related to how our working memories take in information. As those limits are approached or passed, our short-term memories lose their capacity to effectively take in facts and information that they can then pass on to our long-term memories.

So the problem behind Dr. Luria’s patient’s inability to engage in abstract thought may not have been that anything was the matter with his capacity for abstract conceptualization. Instead, it may simply have been that his short-term memory had been inundated with so many facts and so much information that he wasn’t able to pass these on to his long-term memory, where he could then use them to think abstractly. And in considering this unusual incident concerning thought and memory from another age, it may not be so far-fetched to consider its specific implications for our own.

Like the patient who was losing his ability to conceptualize, might we in our current digital age be losing our capacity for abstract thought simply because we are overwhelming our short-term memories with an amount of facts and information with which they formerly did not have to contend? And are we likewise diluting our long-term memories to the point where they no longer give us access to the information we need to think abstractly?

In a cyber age that is growing and expanding exponentially, this potential loss of our capacity for abstract thought would appear to be something we all need to seriously consider. Ultimately, it would affect so many different areas of our lives, from our capacity for scientific exploration to our comprehension of great works of art and literature to our ability to remain psychologically free in a world now largely constructed by the algorithms inside our digital devices. In addition, abstract thought has always been the basis of the growth of human intelligence and innovation throughout mankind’s historical development. Without it, we would almost certainly regress as a species.

Lyn Lesch’s upcoming book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled, published by Rowman & Littlefield, will be available later this Fall.

A Potentially Diminished Intelligence in the Digital World
  Lyn Lesch        September 17, 2019

As many people already know, there is a marvelous scene in Stanley Kubrick’s visionary movie from the late 1960s, 2001: A Space Odyssey, in which an early ape-man discovers that he can use the bones of dead animals left on the African plain as clubs to drive away wild animals or other early humans who might be menacing him. This may well be not only the first tool that humans used to control their environment, but likewise the dawn of our concrete intelligence. Rejoicing at his discovery, the ape-man throws one of the bones in the air, it hangs there suspended, and then, in one of Kubrick’s miraculous touches, it becomes a modern-day space station.

Obviously, this brief classic cinematic moment compresses the entire progression of human intelligence, from the concrete to the abstract. That is, as our intelligence grew increasingly abstract, in the same way a child’s intelligence moves from a stage of concrete operations toward one of abstract conceptualization, we eventually were able to produce great, towering works of abstract thought, such as Charles Darwin’s Origin of Species or Albert Einstein’s General Theory of Relativity, as well as insightful works of brilliant metaphysical speculation, such as are found in the writings of the ancient Chinese philosopher Lao-Tzu or the great modern teacher Krishnamurti.

The point is that the evolution of our intelligence has always moved steadily from the concrete toward the abstract. Paintings on the walls of caves, the first known use of symbols by early humans, eventually gave way to symbols representing thoughts and words, which in turn led to the first alphabet and the beginning of written language. People’s thoughts concurrently progressed from those of a more primitive variety to actual scientific investigations of ourselves, our world, and our universe, following a distinct pattern of growing increasingly abstract and complex.

Except, maybe, up until now. The digital devices of our modern cyber world, fascinating as they are, may be just tools, in much the same manner that the bone of an animal became a tool for early humans: something that can be used to come to grips with one’s day-to-day existence without one necessarily having to understand how and why the tool is so effective. And if those same digital tools come to increasingly dominate our world and our culture, will our intelligence move correspondingly backwards, toward concrete rather than abstract thought, eventually reaching a place where we can no longer conceptualize ourselves or our world as effectively as we once did?

Unfortunately, there is evidence that this may already be occurring. Schools now rely increasingly on empirical guidelines, such as test scores, to evaluate student learning, rather than looking at that learning more personally and qualitatively. Success in various fields – from musical talent to business to cooking – is ever more a function of being judged by panels of critics on reality television. Nearly everything one purchases online at a site like Amazon is subject to being rated with stars by the consumers who bought the product. Even which information one has quick access to on Google or other search sites is subject to rankings determined by digital algorithms.

Of course, except for what is occurring in education, these are all relatively minor examples of thought in contemporary culture becoming more concrete and less abstract. The more important issues are things like how our art and literature might be dumbed down as our consciousness grows more concrete and less theoretical; how our direct insight into situations or other people might be similarly diluted; or how our ability to think creatively in solving either scientific or societal problems might be affected by a diminished capacity to think abstractly and theoretically. These are just some of the issues with which we are going to have to deal in ascertaining how much, if any, of our intelligence might be compromised by this new digital age in which we have now come to live.

Lyn Lesch’s upcoming book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled, published by Rowman & Littlefield, will be available later this Fall.

The Internet and the Individual
  Lyn Lesch        August 1, 2019

The late philosopher Krishnamurti said many times, “You are the world,” meaning that what we often believe to be our unique, autonomous selves are really just all the different ways in which we are conditioned by the world in which we live. That is, we believe that, for the most part, our individual selves are unique beings with unique characteristics, when the truth of the matter is that we are all conditioned by the world in exactly the same ways. And because we spend so much time living in our own heads, we don’t have the degree of insightful awareness needed to perceive this fundamental truth.

Now, however, a new type of conditioning has come into the world – the World Wide Web – which conditions us not just to think and act in certain ways, as other forces and dynamics in society have previously conditioned us, but also conditions how we think and how we access information. That is, when we go online seeking information about various subjects, the neuronal networks in our brains begin to merge with the algorithms and other digital pathways inside whatever virtual site we’re accessing. As a result, we start to lose control of what had once been the natural pathways of our thinking minds.

Even though we tend to believe that we are actually choosing how we navigate the Web as we scroll through it, and likewise believe that we are maintaining our capacity to freely investigate whatever interests us, the fact of the matter is that our investigations take place within a much narrower domain than we believe. This is so simply because the Internet has already collected a plethora of information about us – our likes and dislikes, the information we have searched for in the past, our purchases, even our past associations with other people. And when we employ our thinking minds to search for new information, the domain of our search has often become the narrow result of our past virtual experiences.

On the other hand, when we think about or investigate the particulars of our world in real time and space, without having entered the virtual world, there is the possibility that we can do so in a thoroughly open-ended manner – as long as we remain aware of our past conditioning by the non-virtual world of which we’re a part. Yet when we investigate our world or try to apprehend its particulars while we are online, our past is forever determining the direction of our investigations, simply because those investigations are in effect taking place within the realm of our previous virtual experiences. This means, of course, that we are forever being directed by our past in attempting to investigate our present.

The larger question, of course, is how much of our personal uniqueness and individualism might be slipping away in a virtual world that is increasingly able to control not just what we think, but even how we think, as the digital devices we use learn from the very ways our networks of thought gather information, and then use that knowledge to shape those networks. All this serves as a means of allowing large search engines to expand their business model for the purpose of directing our attention in ways that are financially advantageous to them, such as the ability to sell us an increasing amount of advertising. This is something it would seem we all need to take notice of, as it is very much occurring in present time.

Lyn Lesch’s upcoming book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled, published by Rowman & Littlefield, will be available this Fall.

Attention and Memory in the Digital Age
  Lyn Lesch        July 21, 2019

This past week in The New York Times, Tim Herrera wrote an article, “You’re Not Paying Attention, but You Really Should Be,” in which he gave tips for noticing the real-time events occurring all around us, in lieu of staring into our smartphones and PCs all day to the exclusion of most everything else. Among his suggestions, gleaned from the experience of people he knew, were to notice what everyone else overlooks, to take time away from everything in order to engage directly with the world around you, and to listen to your own curiosities in order to see where they might lead.

Yes, of course people tend to be so focused on their phones, PCs, and other digital devices that they often barely notice the particulars of the world around them – particulars that are part of the stream of life, which might carry us to all kinds of interesting places if only we would let it. Yet at the same time, one has to ask how much control we still actually have over our attention spans when we have become so conditioned by our daily, obsessive acquaintance with our digital devices that our physiological brains and working memories have become subservient to large search engines like Google or memory devices like Echo or Alexa.

We have become so used to outsourcing our working memories to Google or Alexa, employing them to provide us with knowledge or information that we do not immediately have at our fingertips, that the neuronal pathways of our brains have begun to calcify, because they are no longer being used to the degree that they once were. And because our working memories are no longer as sharp as they once were, we can no longer focus on the particulars of our world as effectively, since those memories have lost much of their capacity to absorb those particulars.

Being attentive to one’s world necessarily means having the capacity to assimilate one’s experience into something meaningful. And that something meaningful is necessarily to be found in our long-term memories, which allow us to assimilate our present experience into the meaningful experience of our past. That is to say, as our working memories become gutted by memory devices, and by information overload in the digital world, our attention will necessarily become less focused. For there really is no way to separate the two dynamics of memory and attention, each of them being highly dependent upon the other.

If one wants to pay closer attention to other people, to nature, or to any of the vagaries of daily experience, then one really needs to start by making sure that one’s working memory is fully intact. Subjecting oneself to daily information overload on the Web, or consistently using massive search engines like Google to access knowledge, facts, and information rather than trying to pull those facts naturally out of the neuronal pathways of one’s own working memory, is the wrong direction to travel if one wants to pay close attention to all of the interesting people, situations, and learning experiences one might encounter during the course of daily life.

Lyn Lesch’s upcoming book Intelligence in the Digital Age: How the Search for Something Larger May Be Imperiled, published by Rowman & Littlefield, will be available this Fall.

Intelligence in the Digital Age
  Lyn Lesch        April 5, 2019

People tend to view the subject of intelligence somewhat myopically. That is, it tends to be equated with capacities such as the knowledge one has at one’s fingertips, the ability to think rationally at a level that transcends what other people can, or, more narrowly, with empirical measures such as an IQ score or what one has scored on the bar exam. In other words, it is often viewed largely through a cognitive lens, rather than through the lens of a much broader perspective.

However, there are any number of other qualities which might define what the word intelligence really means: the capacity for direct insight into the nature of situations or other people; the ability to think creatively by fusing elements of different areas into one; the capacity for developing a clear internal picture of one’s world; the ability to explore what exactly the boundaries of thought and memory are; the capacity for achieving a creative mental “flow” state; or the capacity to examine one’s own conditioning by the world through self-reflection.

Unfortunately, a number of these qualities may be under assault in our current digital age. For instance, the sort of direct insight and creative absorption that is representative of the classic “aha” moment of recognition is often diminished by the kind of distracted awareness engendered in people as they jump relentlessly between bits of information on the Internet. Or there is the fact that neuroscientists, psychologists, and others are increasingly finding that the amount of multitasking in which people now engage on their digital devices is leaving them less able to transfer learning from one context to another.

Or, as people’s short-term memories are bombarded with more information on the Web than they can readily absorb, they can no longer convert those memories as effectively into the sort of long-term ones that give them a clear internalized picture of their world. In addition, the high-powered interruption machine of the Internet, which often breaks our concentration into isolated pieces of knowledge, is making it increasingly difficult to follow one’s stream of consciousness toward a self-absorbed flow state. And as the neural pathways of our brains fuse more and more with the digital pathways inside our PCs and phones, it is becoming ever more difficult to step back and effectively examine our conditioning by both the world and the Internet.

It would seem important that we begin to look at intelligence through a much broader lens, and to give the qualities alluded to above even more credence than we have before, so that they are not decimated by our current digital age to the point where they actually disappear simply because we are not paying enough attention to what is occurring. Unfortunately, our current cyber age tends to place far too much emphasis on the more cerebral aspects of intelligence, simply because those aspects fit so well with the algorithms inside our PCs and phones. But real intelligence is often more intuitive, subjective, and emotive than that. Let’s hope that we keep that firmly in mind as we move forward in this new age.

Imagination with a Dose of Algorithms
  Lyn Lesch        December 19, 2018

In Helen Schulman’s new novel Come with Me, one of the characters is experimenting with algorithms that allow people to play out, in virtual reality, alternative scenarios that might have taken place in their past – what someone’s life might have been like, for instance, if they had married this or that person, or not aborted a baby, or if a close friend or relative hadn’t died. In other words, fictional scenarios are created that allow people to experience, in detail, what their lives would have been like if their circumstances had changed.

Yet as intriguing as this might be from a scientific standpoint, one has to imagine just what the consequences might be if people begin to re-imagine their pasts while being directed on this journey by the algorithms inside their digital devices, rather than imagining what might have occurred in their lives through the organic power of imagination. And if this became a new trend, what might it do to our imaginations if we became dependent on algorithms to guide us on this cyber journey into our pasts and our futures?

This, then, might be one of the real potential dangers of allowing digital incantations to guide us in musing about our lives: that the organic nature of our imagination itself might be guided by the algorithms and codes inside our computers and phones until eventually it becomes just as mechanical as they are. And if that happens, then we really are being run by the machines we have designed, simply because the power of our imagination is the one way we can still fully distance ourselves from the digital universe.

Language in the Age of Artificial Intelligence
  Lyn Lesch        December 2, 2018

Recently, researchers at the Allen Institute for Artificial Intelligence in Seattle examined whether machines could correctly complete simple sentences when given several choices for how a particular sentence might end. Upon first putting the lab’s A.I. systems to the test, the researchers found that they were able to correctly complete the test sentences approximately 60 percent of the time, while human subjects completed them correctly 88 percent of the time. Still, according to the experts who had built the machines, 60 percent seemed an impressive number. Two months later, however, Google researchers unveiled a system they called BERT, which was able to complete the sentences at the same rate of success shown by humans.
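To make the kind of test the researchers ran more concrete, here is a minimal sketch of how one might score candidate sentence endings with a BERT-style masked language model. This is an illustration only, not the Allen Institute’s or Google’s actual benchmark code: the model name, the example sentences, and the pseudo-log-likelihood scoring method are all assumptions made for demonstration.

```python
# A minimal sketch of multiple-choice sentence completion with a masked
# language model. Illustrative only: the model, sentences, and scoring
# method are assumptions, not the researchers' actual benchmark setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn and summing the
    model's log-probability of the token that was actually there."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stem = "She poured herself a cup of coffee and"
endings = ["drank it while reading the paper.",
           "planted it in the garden to grow more."]
best = max(endings, key=lambda e: pseudo_log_likelihood(f"{stem} {e}"))
print(best)  # the model should prefer the more plausible ending
```

In the actual research, systems of this kind were typically fine-tuned on multiple-choice datasets rather than scored zero-shot in this way, so the sketch is only a rough analogue of the experiments described above.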

In fact, researchers involved with A.I. have recently been able to show that computer systems can indeed learn the vagaries of human language and then apply them to a variety of specific tasks. Expanding on this somewhat surprising development, several independent research organizations have become increasingly convinced that they can improve their technology to the point where digital assistants such as Alexa or Google Home will learn the syntax of language well enough to analyze important documents inside law firms, hospitals, banks, and other businesses; maybe even reaching the point where they can carry on a decent conversation with humans in the process.

Yet for those who might think that the language now being facilitated in these systems could one day actually replace human language and communication in a number of different areas – and indeed this might occur – there is still a distinct danger in all this, one which could have potentially wide-reaching consequences. This is simply the fact that the word is never the thing itself, meaning that the syntax of either written or spoken language can never lead one completely to the experiential reality to which it alludes. And these voice-activated A.I. systems, because they have an entirely syntactical rather than an experiential or emotive basis, will continue to skim the reality they express at a level that remains predominantly superficial.

The neuronal networks of our physical brains possess a high degree of what neuroscientists call plasticity. This means they can easily undergo changes in both their structure and function based on how they are affected by our experiences. And because they are endlessly malleable, they are essentially different from the networks inside digital devices that employ various forms of artificial intelligence. Those networks, which are largely a function of algorithms, tend to be fixed until someone changes them externally. That is, there has been very little evidence so far that A.I. systems can regulate their own patterns of intelligence in the same malleable manner that human beings can.

Therefore, when human neuronal networks come in contact with the digital networks that are part of A.I. systems, they are coming into contact with something more rigid and fixed than the malleable pathways inside our own brains. And because A.I. systems absorb language and speech at a level essentially shallower than that of humans – their absorption being syntactic rather than experiential – will the speech and language of humans, through our consistent interaction with these systems, likewise grow shallower over time as it skims along the surface of mere syntax? That is, will our own malleable neuronal networks be conditioned by the more rigid digital networks to attend to just the word, and not the thing which the word represents?

As the networks of our thinking minds come increasingly into contact with the digital networks of artificial intelligence, will our thoughts themselves tend to become more rigid and fixed because of how they are being conditioned by the artificial networks of A.I. machines? Or will our own organic networks of thought remain just as malleable as they have always been? This is a significant question that needs to be asked as we move further into a virtual, non-human world which might be conditioning us more than we can even imagine.

Consciousness and Artificial Intelligence
  Lyn Lesch        November 25, 2018

According to an article in the current issue of Popular Science, Jaan Tallinn, an Estonian-born computer programmer who also has a background in physics, has lately been conducting a campaign to save us from an artificial intelligence that might eventually reach the point where we can no longer control it. After co-founding Skype in 2003 and developing a back-end for the app, he cashed in his shares when eBay bought the company two years later and began focusing his attention on just how far A.I. might intrude into our culture and our daily lives. Although A.I.s have so far inserted themselves only into specific areas such as cleaning the kitchen floor, playing chess, or recognizing human speech, Tallinn believes they might eventually broaden their capacities to manipulate humans in dangerous ways through the data generated by our smartphones – even to the point where ultra-smart A.I.s might one day take our place on the evolutionary ladder, dominating us the way we now dominate apes, or even potentially exterminating us.

Although much of this might seem like the stuff of science fiction, similar to how the massive computer HAL dominated the two astronauts in Stanley Kubrick’s film 2001: A Space Odyssey, Tallinn is entirely serious about his mission to save us from our digital creations by using the world of artificial intelligence itself to combat them. In addition to funding programs devoted to using A.I. against itself for the purpose of abrogating threats to humanity’s survival, he co-founded the Centre for the Study of Existential Risk (CSER) at Cambridge in 2012, together with two other scholars who shared his concerns about how humans might be manipulated in the future by various forms of artificial intelligence.

According to the article in Popular Science, the ultimate goal of Tallinn and CSER is to produce A.I. safety research that will eventually create machines that, according to CSER co-founder and Cambridge philosopher Huw Price, will be “ethically as well as cognitively superhuman.” In response, however, other A.I. researchers have raised the question: if we don’t want A.I. to dominate us, do we want to dominate it? In other words, does A.I. actually have rights? Tallinn has argued that this question is irrelevant because it assumes that intelligence equals consciousness, even going so far as to say that consciousness is beside the point: as he has been quoted saying, we don’t have the luxury of worrying about consciousness if we haven’t first solved the technical challenges to our safety and continued survival as a species.

Supposedly, many A.I. researchers are annoyed by what they consider to be a misconception in equating intelligence with consciousness. And if one looks at consciousness in terms of its more limited definition – simply being aware of one’s environment to the point where one might be able to interact with it – it is easy to see why any attempt to equate this narrow definition with artificial intelligence itself might indeed rankle those researchers. Yet at the same time, if one is able to imagine a more expansive consciousness – one which might in fact be some form of a higher intelligence – then one’s perspective on consciousness and intelligence in relation to artificial intelligence begins to take on an entirely different hue; one that might actually have to do with saving us as humans in ways entirely outside the capabilities of even the most complex, advanced machines.

Perhaps attempts to investigate a more expansive consciousness, provided one chooses to use that term, are misnomers of sorts, simply because they imply being conscious from a center, a me; whereas in a consciousness which might transcend our usual, limited definitions of intelligence – and, for that matter, any sort of artificial intelligence – there is no center. In point of fact, that might be the very definition of a larger intelligence and more expansive consciousness: that it doesn’t operate from a center. A.I.s, on the other hand, would seem forever doomed to operate through a center, simply because they have no capacity to conceive of a world that might exist outside their own activities. Nor do they have any capacity for the intuitive moment of direct insight that comes into existence when the emotive and cognitive worlds become one.

Jaan Tallinn might indeed be right to worry about how A.I. machines could one day take control of our daily lives in ways hazardous to our continued existence as a species. Yet there might be a threat to our continued evolution as a species that is just as dangerous: that the digital machines of the world of artificial intelligence will begin to dull both our cognitive and emotive capacities in ways that prevent us from searching for a larger consciousness and a more insightful intelligence – diminishing things like our working memories, our capacity for creative absorption, or our potential for a highly receptive inner life. For if that begins to occur, our ultimate purpose as humans, the search for that something larger, might begin to vanish as well.
