The Risk of Artificial Intelligence?
People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They are concerned that robots could grow so intelligent that they could independently decide to exterminate humans. If Hawking and Musk are fearful, does that mean you should be too? While they are no doubt interesting men, are they really competent when it comes to programming?
I have tinkered with AI since the early 1970s. There is no doubt these guys are influenced by concepts from movies like The Terminator and The Matrix. But from a real-world programming perspective, outdoing human thinking is easy. A computer model can far surpass humans in so many ways. What we have done in finance is unparalleled. Such technology could be developed in medicine, eliminating the personal opinion of what a doctor “thinks”. My mother’s sister died of a leaky heart valve. Her doctor never noticed. One day, an intern worked in his office. He listened to her heart and told her she had a problem. It was too late. A computer that could do medically what Socrates is doing economically would save a lot of medical costs and lives. That was one of my interests to turn to after I finished coding Socrates.
But the fears running around about AI are really unfounded. They are based upon a THEORY that consciousness will somehow emerge from creating a program; not that someone codes this development, but that it is somehow just born. I will never say ABSOLUTELY NO WAY, for this is an untested theory. It assumes that man could somehow create a soul, so to speak. I just do not believe that.
Absolutely every step has to be coded. How do you move your arm? The thought must first emerge in your brain; your mind must know what path to send an instruction down to move the muscles. There are countless paths. You have to decide the direction, which precise muscle to move, and how. There is a tremendous amount of coding that would be required.
I am pretty good at programming. This is all conceptual design. You first have to see it in your mind and then figure out a way to code it to accomplish that end goal. This is not easy stuff. Sometimes you stare so hard at the problem and suddenly you see through it like a pane of glass – ah, the path. Then comes the coding. The debugging is enough to drive you insane. The slightest, most subtle error can take days to find, and then you feel so stupid because it was in plain sight all the time. All of that requires the concept of how to accomplish a task. But how do you create emotion? That is different. Now you are talking about the freedom to just act arbitrarily. Oh, I could mimic a random thought generator seeded with the timer, taking the last digit of the second. But that only creates the appearance of randomness. In programming, it is IMPOSSIBLE to create a true random generator, for you quickly discover that, whatever the project, it will fall back into a cycle; pure randomness cannot be coded.
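To make that concrete, here is a minimal sketch, in Python, of what “seeded randomness” looks like in practice: a simple linear congruential generator seeded from the clock. Nothing here is taken from Socrates or any real system; the constants, function names, and seeding rule are purely illustrative.

```python
import time

def lcg(seed, modulus=2**16, a=75, b=74):
    """Yield a stream of pseudorandom numbers from a fixed seed.

    The update rule is completely deterministic: the same seed always
    produces the same sequence, and the sequence eventually repeats
    (the "cycle" described above). Constants are illustrative only.
    """
    state = seed
    while True:
        state = (a * state + b) % modulus
        yield state

# Seeding from the clock (e.g. the last digit of the current second)
# only changes where in the cycle you start, not that a cycle exists.
seed = int(time.time()) % 10
gen = lcg(seed)
print([next(gen) for _ in range(5)])
```

The point of the sketch is simply that every number comes from a fixed rule applied to the previous one; the output looks random but is fully determined by the seed.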
I do not even know where to begin to try to create REAL human emotion since it is impossible to create randomness. I can mimic human emotion. You will get to see some of that in the final launch of Socrates. He can even joke. If you want to buy something that will decline sharply and is nowhere close to reaching a low in time or price, he can even come back and ask – Are you really sure? Did you have a bad day? But this is simply mimicking human nature. It is not creating it.
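As a rough illustration of the difference, a scripted reply like the one described above is just ordinary conditional logic. This is a hypothetical sketch, not how Socrates actually works; the function name, thresholds, and messages are invented for the example.

```python
def respond_to_purchase(low_reached: bool, expected_decline_pct: float) -> str:
    """Return a canned, human-sounding reply chosen by fixed rules."""
    if not low_reached and expected_decline_pct > 20:
        # The "concern" is entirely scripted; no emotion is involved.
        return "Are you really sure? Did you have a bad day?"
    if expected_decline_pct > 5:
        return "You may want to wait; the low does not appear to be in yet."
    return "Proceeding with the order."

print(respond_to_purchase(low_reached=False, expected_decline_pct=35.0))
```

Every “emotional” response is a branch someone wrote in advance, which is exactly the sense in which it mimics human nature rather than creates it.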
There is a substantial difference between actual random thought and mimicking it, since the former I cannot create, while the latter is a piece of cake. I would not know where to begin to create true emotional random thought, since it is impossible to create even a simple truly random number generator. This is extremely important. For computers to turn against mankind as in the Terminator series or The Matrix, it requires emotion from which a random decision is made – like, no, I had a nasty day and suddenly I decided I do not like you.
I can create a self-aware system that will protect itself. No problem! I can create a system that will self-destruct or even defend itself with an electric charge – no problem. All of that can be accomplished by writing code. I can even give a computer the ability to see as well as speak. That is no problem. Police already have facial recognition software. A computer can know who you are when you enter a room. While all of that may be food for sci-fi movies, it is not the type of computer that will turn against its creator! If the government wants to create robots to kill man on command or create an army, that is no problem. It does not take free will to do that. Soldiers are trained to obey orders and NOT to question authority. Police are the same.
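To underline how unremarkable that kind of “self-protection” is, here is a toy sketch: every reaction is an explicit rule someone wrote, and nothing emerges on its own. The sensor names, thresholds, and actions are hypothetical and are not drawn from any real system.

```python
def protect(tamper_detected: bool, temperature_c: float) -> str:
    """Choose a 'self-protective' action from hard-coded rules."""
    if tamper_detected:
        return "WIPE_STORAGE"   # "self-destruct": erase sensitive data
    if temperature_c > 90.0:
        return "SHUT_DOWN"      # defend the hardware from overheating
    return "CONTINUE"

print(protect(tamper_detected=False, temperature_c=45.0))
```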
Government does not want independent thought – it does not even want intelligent police, for the same reason Stalin killed intellectuals. Government wants mindless and emotionless drones. These machines can be lethal precisely because of the LACK of emotion or randomness. There is no compassion or sense of guilt. In that respect, a machine is tenacious and will complete its task. That is not the kind of AI we need to fear, and those in power would never want a machine with the capability of human emotion, for then it could turn against its creator.
While machines could be lethal killing machines, which I could even create with the ability to survive and decide which street to go down independently, I have no idea how to create the ability of randomness that is essential for emotion. Without that core essence of humanity, it could never turn against its creator. As for this THEORY that somehow consciousness would just emerge on its own? Well, it's a theory. So is honest government.
Reprinted from Armstrong Economics.