The following is a rewrite of an article that I posted last year. Recent news has led me to edit the original and repost it, because my stance on the issue has not changed. So, I feel it bears repeating:
Apocalypse: a nuclear wasteland. The brittle bones of the masses of humanity litter a dark future battlefield. A Terminator's foot slams down and crushes a skull. This is the opening scene of Reese's dream in The Terminator. But is this even a remote possibility? Are we doomed as a species to create the ultimate war machine, one that would judge us as inferior, master our world, and push us aside? Although the scenario has a non-zero probability, I doubt it is at all likely.
Humans don’t create technology just for the sake of creating it. If we did, we would have created hybrid and electric cars for mainstream use a long time ago; such technology just wasn’t as profitable as oil until recently. Self-determining machines, at least on the level of full artificial intelligence, aren’t profitable. It’s a nice area of research, and some amazing results have been achieved. But there is a reason we don’t have R. Daneel Olivaw (The Robots of Dawn, Isaac Asimov) serving us, holding deep conversations with us, and capable of breaking any of the Three Laws of Robotics: industry has no use for him today.
The fact is, machines have always taken a supporting role, like collision avoidance in automobiles and sensors that open doors for the disabled. We use technology to correct damage to our bodies, like prosthetic limbs and electronic eyes that see. We develop technology to enhance our abilities, like smart drugs that target memory storage and recall in the human brain. The point is that technology development has always been for the purpose of improving the human condition.
That said, there certainly have been developments that did and do the exact opposite, like firearms and the nuclear bomb. But these technologies are in the hands of humans, not self-directed machines intent on wiping us out. Don’t get me wrong: I don’t rule out the possibility of a small group of psychotic humans working to develop such automation. Again, I just find it unlikely. Wherever you get someone with the ability to do this, you have many more someones who can find those people and stop them.
The human predator instinct that drives our desire to take and destroy is balanced by the instinct for survival and continuance. The psychotic among us have always been few compared to the whole. We have the ability to make the perfect war machine, but we, as yet, don’t trust anything but ourselves to oversee it. This is why Skynet can never exist in reality: the risk would be far too great, and the military leaders of our world would never trust it.
For this reason, machines will never be allowed to self-direct or make decisions without a human behind them. In a private interview, a Captain in the Canadian Forces Reserves told me that machines with guns would never happen. This is why we use human-directed drones to deliver the killing blow to the enemy, not AI software running on those same drones. Such automation removes the human element. Military officials are concerned with limiting collateral damage and the deaths of innocents and civilians, and to them a machine simply cannot detect the nuances of behavior as quickly or as effectively as a trained human. They will never trust one not to kill a child. Although these kinds of mistakes do happen, they happen far less often than they could.
Physicist and futurist Michio Kaku, along with many other highly educated individuals, foresees technology enhancing our abilities. If a technological apocalypse is going to happen, it will involve mechanically enhanced humans and drones on the battlefield, not robots and Hunter-Killers blasting anything with body heat. If anything of that sort does get developed, pushed through the testing phase, and placed on the battlefield, its active duty would be short-lived. Sooner rather than later, these things would rampage through friendly forces, and the kill switch on the whole idea would have to be thrown. Artificial intelligence is still far from reality, still far from full sentience. The mimicry achieved so far is amazingly convincing, I’ll give you that. But we are still seeing these demonstrations on the showroom floor, where the stage is set in such a way as to make mimicry look like sentience. We are convinced that AI technology is just around the corner. Yet, as Tay the artificial racist demonstrated, the AI we think we see is not true AI. It’s certainly artificial, but the “I” half is just not there.
It is one thing to program your neural network to learn; it is quite another to have that same network differentiate between right and wrong, reason and unreason, or logic and stupidity. It’s one thing to create a mechanical pack-dog that follows your unit. It is another to mount a weapon on it and tell it not to kill friendlies or children playing with sticks for imaginary guns, or to defend it from being hacked by the enemy and turned against you. If the Pentagon cannot keep secrets from Anonymous, and cannot even prevent its drones from being hijacked electronically, then robots cannot be properly defended either. This is the very risk that can turn an advanced weapon into an advanced liability, and that is why such machines will never be trusted by the warmongers among us. Skynet is an idea of fiction, and will remain so for as far into the future as we can see.