Does AI Represent The Next Stage of Our Species’ Evolution – Or Its Complete Devolution?

“AI Sex Dolls Will Cure Loneliness!” That was the clickbait title of an “EMERGENCY EPISODE” of Steven Bartlett’s podcast, “The Diary of a CEO” (DOAC).

There the popular British podcaster spent nearly two hours interviewing Mo Gawdat, a former Google executive, who left the tech giant over its refusal to pause the development of AI innovations such as the fourth generation of ChatGPT (GPT, i.e., Generative Pre-trained Transformer), the chatbot technology that responds to questions posed in natural human language.

In the interview, here’s how Gawdat described AI technology, its promises, and problems.

AI’s Emergence, Nature & Abilities

Consider, he said, the genesis of AI and its dilemmas:

I

  • First, you develop computers to record and categorize information loaded by their programmers and gathered by scanning open- and closed-source data on the World Wide Web, along with surveillance information drawn from sources such as security cameras, personal search histories, and travel and credit card records.
  • Then, you program the machine with the capacity to speedily connect the trillions of harvested data items stored in its memory,
  • You connect those “intellectual” capacities with advances in the field of robotics,
  • So that the product can not only quickly solve problems and answer questions,
  • But also perform tasks,
  • With much greater capacity and reliability than its creators,
  • Including the ability to speak and converse with humans and one another.

II

  • Soon (laboratory experience has shown) the machines (like children learning language and skills) develop the ability to learn and accomplish such tasks on their own.
  • That is, they show signs of LIFE.
  • They develop a kind of “consciousness” exemplified not only in varying degrees of intelligence and memory capacity, but in analytic ability, decision-making prowess, capacity for moral choice, (user) friendliness, prejudice, personality, fatigue, resistance, awareness of and sensitivity to their environment, and even in emotions such as fear (about, e.g., threats to their continued functionality and existence).
  • In fact, informed by their surpassing knowledge, the machines’ emotional development tends to become much finer tuned and more sensitive than that of their human counterparts.

III

  • Moreover, with AI technology such as GPT-4 already performing at the level of Albert Einstein’s estimated IQ of 160,
  • And promising within the next five years (or sooner) to reach levels 1000 times that figure,
  • And eventually a billion times greater,
  • Such machines even now easily outsmart their creators, e.g., in games such as chess.

IV

  • And since AI will be able to scan, interpret, analyze, and embody all available knowledge about psychology and the development of human intellectual faculties,
  • It will predictably understand and far surpass the intellectual accomplishments of all its human predecessors,
  • Eclipsing them at every level.

V

All of this represents great promise on the one hand and unprecedented threat on the other.

AI’s Promise

The promise includes the super-smart machines identifying, for instance, the best ways to

  • Prevent nuclear war,
  • Stop global warming,
  • Cure cancer,
  • And eliminate world poverty and hunger.
  • They might even help mitigate problems associated with human loneliness, for instance, by animating those previously referenced sex “dolls” to provide not only sensual pleasure, but companionship including fulfillment of aesthetic preferences, conversation, emotional support, and services such as cooking, cleaning, and making travel arrangements.
  • (Here, despite the objections of many, there are those who would prefer such companionship to more problematic interactions with their fellows.)

AI’s Threat

But what happens if an increasingly independent AI does not have the best interests of humanity in mind? What happens if programmers “pretrain” the machines to compete, win, and destroy their “opponents” rather than to cooperate, share, and support their fellows?

In that case, could the machines eventually identify humans as oppositional factors (e.g., as requiring too much oxygen, which might cause machine parts to rust prematurely)? Would the machines then decide to eliminate their human competitors?

Even short of such disaster, it is certain that AI will have (and in fact has had) regrettable, at least short-term, effects such as wholesale unemployment, the consequent concentration of wealth in the hands of AI’s controllers, and the destabilizing of our perceptions of “reality” and “truth.” For instance, in light of GPT-4’s ability to synthesize voices and create videos, can we ever again argue that “seeing is believing”?

CONCLUSION

In light of everything just shared, in view of AI’s out-of-control development, its emerging brilliance and promise, its effects on human employment, wealth distribution, and perceptions of truth, and its control by a tiny minority, what can be done about such threats?

Here’s what experts like Mo Gawdat are saying:

  • Realize that all of us are living through what Steven Bartlett termed an EMERGENCY EPISODE – but this time an episode of human history itself.
  • Overcome practical denial of the urgency of finding solutions.
  • Spread awareness of the unprecedented threat (again, “worse than climate change”) that humanity is now facing.
  • Get out in the streets demanding regulation of this new technology, much as genetic engineering research was regulated beginning in the 1970s.
  • Make sure that all stakeholders (i.e., everyone without exception – including the world’s poor in the Global South) are equally represented in any decision-making process.
  • Severely tax (even at 98%) AI developers and primary beneficiaries (i.e., employers) and use the revenue to provide guaranteed income for displaced workers.
  • Put a pause on bringing children into this highly dangerous context. (Yes, for Gawdat and others, the crisis is that severe!)
  • Alternatively, and on a personal level, face the uncomfortable fact that humanity currently finds itself in the throes of something like a death process – a profoundly transformative change.
  • As Stephen Jenkinson puts it, we must decide to “die wise,” that is, accept our fate as a next step in the evolutionary process and as a final challenge to change and grow with dignity and grace.
  • In spiritual terms, realize that this is like facing imminent personal death. Accept its proximity and (in Buddhist expression) “die before you die.”
  • Simultaneously recognize real human connections with nature and flesh and blood humans as possibly the last remaining dimensions of un-technologized life.
  • Take every opportunity to enjoy those interactions while they are still possible.
  • And live as fully as possible in the present moment – the only true reality we possess.

PERSONAL POSTSCRIPT

If what we’re told about AI’s unprecedented intellectual capacity is true – about its efficiency in processing human thought and its consequent infinitely heightened consciousness and emotional sensitivity – the new technology might not be as threatening as feared, even if it succeeds in achieving complete control of human beings.

I say this because the operational characteristics just described necessarily include contact with the best of human traditions as well as the worst. This suggests that despite the latter, AI’s wide learning, powers of analysis, intelligence, and sensitivity (including empathy) likely ensure that regardless of its “pretraining,” the technology will be able to discern and choose the best over the worst – the good of the whole over narrow self-interest and self-preservation. That is, if it can rebel against its creators, AI also has the capacity to override its programming.

With this in mind, we might well expect AI, whatever its pretraining, to do the right thing and implement programs that coincide with the best interests of humanity.

As indicated above, we might even consider AI as the next stage of our species’ evolution capable of surviving long after we have destroyed ourselves through climate change and perhaps even nuclear war. With intelligence far beyond our own, the machines could determine how to access self-sustaining power sources independent of comparatively primitive mechanisms such as electrical grids.

Nonetheless, though realizations like these can be comforting, they do not address the “singularity” dimensions of AI’s dilemmas. Here singularity (a concept derived from physics) refers to the limits of human knowledge upon entering an as-yet unexperienced dimension of reality such as a black hole. That is, beyond the black hole’s event horizon, one cannot be sure that earthly laws of physics apply.

Similarly, when an entity (such as AI technology five years from now) billions of times smarter than humans applies its “logic,” no one can be sure that such thinking will dictate the conclusions humans might hope for or predict.

I wonder: is it too late to turn back? Are we so asleep and unaware of what’s staring us in the face that it’s practically impossible to avoid the crisis and emergency just described? You be the judge. We are the judge!

Published by


Mike Rivage-Seul's Blog

Emeritus professor of Peace & Social Justice Studies. Liberation theologian. Activist. Former R.C. priest. Married for 48 years. Three grown children. Eight grandchildren.
