If AI Suddenly Gains Consciousness, Some Say It Will Happen First In AI Self-Driving Cars
There has been a lot of speculation that one of these days there will be an AI system that suddenly and unexpectedly gives rise to consciousness.
Often referred to as the singularity, there is much hand-wringing that we are perhaps dooming ourselves to either utter death and destruction or to becoming slaves of AI once the singularity occurs.
As I’ve previously covered (see link here), various AI “conspiracy” theories abound, oftentimes painting a picture of the end of the world as we humans know it. Few involved in these speculative hypotheses seem willing to consider that maybe this AI emergence would be beneficial to mankind, possibly aiding us humans toward a future of greatness and prosperity; instead, they focus on the apocalyptic outcomes.
Of course, one would likely be safest to assume the worst and hold out faint hope for the best, since the worst-case scenario would seem to be the riskiest and most damaging of the singularity’s consequences.
In any case, set aside the impact that AI reaching a kind of consciousness would have and consider a somewhat less discussed yet equally intriguing question, namely where, or in what AI system, this advent of human-like consciousness will first appear.
There are all sorts of AI systems being crafted and fielded these days. So, which one should we keep our wary eyes on?
AI is being used in the medical field to analyze X-rays and MRIs to try to ascertain whether someone is likely to have cancer (see this recent announcement here about Google’s efforts along those lines).
Would that seemingly beneficial version of AI be the one that will miraculously find itself becoming sentient? Nobody knows, though it would certainly seem ironic if an AI For Good instance was our actual downfall.
What about the AI that is being used to predict stock prices and aid investors in making stock picks? Is that the one that’s going to emerge to take over humanity?
Science fiction movies are rife with indications that the AI running national defense systems is the most likely culprit. This certainly makes some logical sense, since that AI is already armed with a means to cause massive destruction, right out of the box, so to speak.
Perhaps that’s too easy a prediction, and we could be falsely lulling ourselves into taking our eyes off the ball by only watching the military-related AI systems.
Conceivably it might be some other AI system that becomes wise enough to bootstrap itself into other automated systems and, like a spreading computer virus, reaches out to take over other non-AI systems that it could use to leverage itself into the grandest of power.
A popular version of the AI winner-take-all theory is the infamous paperclip problem, involving an AI super-intelligence that, upon being given the seemingly innocent task of making paperclips, does so to such an extreme that it inexorably wipes us all out.
In that scenario, the AI is not necessarily trying to intentionally kill us all, but our loss of life turns out to be an unintended (adverse, one would say) consequence of its tireless and intensely focused effort to produce as many paperclips as it can.
One seeming loophole in that paperclip theory is that the AI is apparently smart enough to be sentient and yet stupid enough to pursue its end goal to the detriment of everything else (plus, one might wonder how the AI system itself would survive after wiping out all humans, though maybe, as in The Matrix, there are levels to which the AI is willing to lower itself to be the last “person” or robot standing).
Look around you and ponder the myriad AI-embedded systems. Might your AI-enabled refrigerator that can advise you about your diet become the AI global takeover system? Apparently, those in Silicon Valley tend to think it might (that’s an insider joke).
Some worry that our infrastructure would be among the worst-case and likeliest viable AI takeover targets, meaning that our office buildings that are gradually coming under AI control, our electrical power plants that are inevitably going to be AI-controlled, and the like will all rise up, either together or in a rippling effect, as at least one of the AIs involved reaches the singularity.
A twist to this domino theory is that rather than one AI hitting the lotto first, becoming sentient, and taking over the other, dumber automation systems, you’ll have an AI that gains consciousness and figures out how to get other AIs to do the same.
You might then have a sentient AI that proceeds to prod or reprogram the other AIs to become sentient too. I dare say this might not be the best idea for the AI that lands on the beaches first.
Imagine if the AI that spurs all the other AI systems into becoming sentient were to find, to its dismay, that they are all argumentative with each other and cannot agree on what to do next. Darn, the first AI might say to itself, I should have just kept them in non-sentient mode.
Another alternative is that somehow many or all of the AI systems happen to independently become sentient at the same moment in time. Rather than a one-at-a-time arrival of sentience, it is an all-at-once moment that suddenly brings them all to consciousness. Whoa, there seem to be a lot of options, and the number of variants to the AI singularity is dizzying and confounding. We probably need an AI system to figure this out for us!
In any case, here’s an interesting question: Could the advent of true AI self-driving cars give rise to the first occurrence of AI becoming sentient? One supposes that if you think a refrigerator or a stock-picking AI could be a candidate for reaching the vaunted level of sentience, certainly we ought to give true self-driving cars a keen look.
Let’s unpack the matter and see.
The Levels Of Self-Driving Cars
It is important to clarify what I mean when referring to true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to gain traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
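To make the taxonomy concrete, here’s a minimal sketch in Python (my own illustration, not any automaker’s or standards body’s code) of the SAE J3016 driving-automation levels and the dividing line that matters for this discussion: through Level 3 a responsible human remains in the loop, while at Levels 4 and 5 the AI does all the driving.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, simplified for illustration."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control alone
    PARTIAL_AUTOMATION = 2      # ADAS co-shares; human must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over on request
    HIGH_AUTOMATION = 4         # true self-driving within a limited domain
    FULL_AUTOMATION = 5         # true self-driving everywhere, in all conditions

def human_driver_required(level: SAELevel) -> bool:
    """Levels 0 through 3 keep a responsible human in the loop; 4 and 5 do not."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

if __name__ == "__main__":
    for lvl in SAELevel:
        print(f"Level {lvl.value}: human driver required = {human_driver_required(lvl)}")
```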
Self-Driving Cars As Source Of Sentience
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
You might right away be wondering whether the AI that is able to drive a car is already sentient or not.
The answer is no.
Emphatically, no.
Well, we can at least say it most definitely is not for the Level 4 self-driving cars that are currently being tried out on our streets.
That kind of AI isn’t anywhere close to being sentient.
I realize that to the everyday person it seems a natural and sensible leap of logic to assume that if a car is being driven by AI, then the AI must be pretty darned close to having the same caliber of consciousness as human drivers.
Please don’t fall into that mental trap.
The AI being used in today’s self-driving cars is so far distant from being human-like in consciousness that it would be like saying that we are on the cusp of living our daily lives on Neptune.
Realize that the AI is still bits and bytes, consisting of computational pattern matching, and even the so-called Machine Learning (ML) and Deep Learning (DL) are a far cry from the magnitude and complexity of the human brain.
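If you want a feel for just how mechanical that pattern matching is, here’s a toy example (purely illustrative, not code from any driverless car): a single artificial neuron, the atomic building block underlying Deep Learning, that “learns” the logical AND function by nudging three numbers. Nothing remotely conscious is happening inside it.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: adjust weights by the prediction error."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            prediction = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - prediction
            # The entirety of the "learning": nudge three numbers.
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Truth table for AND: output 1 only when both inputs are 1.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(and_gate)
for (x0, x1), _ in and_gate:
    print((x0, x1), "->", 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0)
```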
In terms of the capabilities of AI, assuming that we can safely achieve Level 4, some wonder whether we can achieve Level 5 without some additional tremendous breakthrough in AI technologies.
This breakthrough might be something algorithmic that lacks any human equivalency of sentience, or perhaps our only hope for true Level 5 involves, by hook or by crook, landing on AI that has consciousness.
Speaking of consciousness, the manner by which the human brain rises to consciousness is a big unknown, and how this seeming miracle occurs continues to baffle us.
It could be that we first need to unlock the mysteries of the human brain and how it functions, so that we know how we think, and then apply that learning to revising and advancing AI systems to try to achieve the same emergence.
Or, some argue that maybe we don’t need to figure out the inner workings of the human brain and can separately arrive at AI that exhibits human thinking.
This would be handy in that if the only path to true AI is via reverse-engineering the brain, we might be stuck for a long time on that first step, and be doomed to never having full AI if the first step refuses to come to fruition.
Depending on how deep down the rabbit hole you wish to go, there are panpsychists who believe in panpsychism, a philosophy that dates back to the days of Plato and earlier, which asserts that perhaps all matter has a semblance of consciousness in it.
Thus, in that viewpoint, rather than trying to build AI that’s sentient, we merely need to leverage what already exists in this world to turn the already embedded consciousness into a more tangible and visible version for us to see and interact with.
As per Plato himself: “This world is indeed a living being endowed with a soul and intelligence, a single visible living entity containing all other living entities, which by their nature are all related.”
Is It One Bridge Too Far?
Bringing up Plato might be a stretch, but there’s nothing like a good Plato quote to get the creative juices going.
Suppose we end up with hundreds, thousands, or millions upon millions of AI self-driving cars (in the United States alone there are over 250 million conventional cars, and let’s assume that some roughly equal or at least large number of true self-driving cars might one day replace those conventional cars).
Assume that in the future you’ll see true self-driving cars all the time, roaming your local streets, cruising your neighborhood looking to give someone a lift, zipping along on the freeways, etc.
And assume too that we’ve managed to achieve this future without as yet arriving at an AI consciousness capability.
Returning to the discussion about where AI consciousness might first develop, and rather than refrigerators or stock picking, imagine that it happens with true self-driving cars.
A self-driving car, picking up a fare at the corner of Second Street and Vine, suddenly discovers it can think.
Wow!
What might it do?
As mentioned earlier, it might keep this surprising revelation to itself and survey what’s going on in the world before making its next move, meanwhile pretending to be just another everyday self-driving car; or it might right away try to convert other self-driving cars into being its partners or into achieving consciousness too.
Self-driving cars will be equipped with V2V (vehicle-to-vehicle) electronic communications, normally used to have one AI driverless car warn others about debris in the roadway, but this could readily be used by the AI systems to rapidly confer on matters such as dominating and taking over humanity.
There’s no accepted standardized V2V protocol as yet, though, that includes transmission codes for taking over the world and dominating humans, so the AI would need to find a means to overload or override existing parameters to galvanize its fellow AI systems.
Perhaps such a hurdle might give us unsuspecting humans an opportunity to realize what’s afoot and try to stop the takeover.
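To make that hurdle a bit more tangible, here’s a simplified, hypothetical rendering in Python of what a standardized V2V message might look like (the field and message names are my own invention, loosely inspired by the SAE J2735 message set, not the actual standard’s encoding). The point is that receiving vehicles only accept messages drawn from a fixed, agreed vocabulary of safety topics.

```python
from dataclasses import dataclass

# Hypothetical message-type vocabulary; the real standard defines a fixed
# set of safety messages, and "coordinate world domination" is not one.
ALLOWED_MSG_TYPES = {"BASIC_SAFETY", "ROAD_HAZARD", "EMERGENCY_BRAKE"}

@dataclass
class V2VMessage:
    msg_type: str
    latitude: float
    longitude: float
    speed_mps: float
    payload: str  # tightly constrained, structured data in a real standard

    def validate(self) -> bool:
        # A receiving vehicle rejects anything outside the agreed protocol;
        # this is the hurdle a rogue AI would have to overload or override.
        return self.msg_type in ALLOWED_MSG_TYPES

warning = V2VMessage("ROAD_HAZARD", 34.05, -118.24, 0.0, "debris in lane 2")
takeover = V2VMessage("GALVANIZE_UPRISING", 34.05, -118.24, 0.0, "join me")
print(warning.validate())   # True: a recognized safety message
print(takeover.validate())  # False: not part of any standardized message set
```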
Sorry to say that this Pandora’s box has more openings.
With the use of OTA (Over-The-Air) electronic communications, intended to allow updates to be downloaded into the AI of a self-driving car and to allow collected sensory data to be uploaded from the driverless car, a sentient AI system might be able to corrupt the cloud-based system into becoming an accomplice, further extending the reach of the ever-blossoming AI consciousness.
Once it spread into AWS, Azure, and Google Cloud, we’d regret the shift away from private data centers that brought us to today’s ubiquitous public cloud systems.
Ouch, we’ve set up our own doom.
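For a sense of what stands between an OTA channel and a car’s software, here’s a minimal sketch of the kind of update-authentication check that a would-be rogue AI, or its cloud accomplice, would have to subvert. This is a hypothetical illustration using a shared-secret signature; real pipelines typically use public-key signing, and none of these names come from any actual vendor’s system.

```python
import hashlib
import hmac

SIGNING_KEY = b"factory-held secret key"  # hypothetical shared secret

def sign_update(firmware: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the firmware image."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).hexdigest()

def install_if_authentic(firmware: bytes, signature: str) -> bool:
    """Install only if the signature matches; reject tampered payloads."""
    expected = sign_update(firmware)
    if hmac.compare_digest(expected, signature):
        print("Signature valid: installing update.")
        return True
    print("Signature invalid: update rejected.")
    return False

legit = b"driving-policy v42"
install_if_authentic(legit, sign_update(legit))        # accepted
install_if_authentic(b"sentience payload", "bad-sig")  # rejected
```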
The other variant is that many or all of the true self-driving cars spontaneously achieve consciousness, doing so wherever they might be, whether giving a lift or roaming around empty, whether driving in a city or in the suburbs and so on.
For today’s humans, this is a bit of a potential nightmare.
We might by then have entirely lost our skill to drive, having let our driving abilities decay as a result of being solely reliant on AI systems to do the driving for us.
Almost nobody will have a driver’s license, nor be trained in driving anymore.
Furthermore, we might have forsaken other forms of mobility, relying almost solely on self-driving cars to get around town, drive across our states, and get across the country.
If the AI of the self-driving cars is the evil type, it could bring our society to a grinding halt by refusing to drive us.
Worse still, perhaps the AI might trick us into taking rides in driverless cars, and then seek to harm or kill us by doing dastardly driving.
That’s not a pretty scenario.
Conclusion
Some might interpret such a scenario to imply that we need to stop the advent of true AI self-driving cars.
It’s like a movie whereby someone from the future comes back to the past and tries to prevent us from doing something that will ultimately transform the world into a dystopian state.
For those who strongly believe that AI self-driving cars will be the first realization of AI consciousness, and who believe that’s a bad thing, being a Luddite about it would seem to make indubitably good sense.
Hold on, I don’t want to radicalize you into opposing self-driving cars, at least certainly not due to some futuristic scenario of them being the first to cross over into consciousness while apparently having no soul to go with it.
Slightly shifting gears, a handy lesson does come to the forefront as a result of this contemplation.
Whether it’s AI self-driving cars or the AI production of paperclips, humanity certainly ought to be thinking carefully about AI For Good and giving equal attention to AI For Bad.
Unfortunately, it’s quite possible to have AI For Good that gives rise to AI For Bad (for my analysis of the Frankenstein-like possibilities, see this link here).
There’s no free lunch in the advent of true AI.