Saturday, April 22, 2017

Exceptional Example of A.I. and Computer Learning in the Finale of "WarGames"

"Wargames" was likely made some time in 1983 and is another on the theme of computers getting out of control.  It comes from the line of "Colossus," "The Forbin Project," etc in which omnipotent computers do unpredicted things.

The question is how to stop the WOPR from initiating the thermonuclear war it has just judged necessary, and the answer to that is remarkable to this day, particularly when so many don't get it even now, all these years later.

In effect, Matthew Broderick's character is trying to get the WOPR to teach itself that war has no winner, since nothing of that nature was programmed into it.  He does it by setting the machine to play tic-tac-toe against itself until it generalizes from games nobody can win to wars nobody can survive; until then, the computer had no awareness of the futility of engaging in a war whose only outcome is that everyone dies.
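For the curious, the machine's lesson can be reproduced in a few lines.  Here is a minimal sketch in Python, nothing from the film itself, that scores tic-tac-toe by exhaustive minimax search: the value of the empty board comes out zero, a forced draw, which is exactly the "no winning move" conclusion the WOPR generalizes to global thermonuclear war.

    # Exhaustive minimax over tic-tac-toe: shows that with perfect play
    # by both sides the game is a draw.  The board is a 9-character
    # string of 'X', 'O', or ' '.  All names here are illustrative.

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals
        for a, b, c in lines:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def value(board, player):
        """Minimax value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
        w = winner(board)
        if w:
            return 1 if w == 'X' else -1
        moves = [i for i, sq in enumerate(board) if sq == ' ']
        if not moves:
            return 0                                # board full: draw
        other = 'O' if player == 'X' else 'X'
        scores = [value(board[:m] + player + board[m + 1:], other)
                  for m in moves]
        return max(scores) if player == 'X' else min(scores)

    print(value(' ' * 9, 'X'))                      # prints 0: nobody wins

The point is not the game but the generalization: a machine that can search a game tree can discover for itself that some games have no winning line.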


This is exactly the consideration behind Asimov's Three Laws of Robotics, which mandate that no robot shall ever hurt a human being or, through inaction, allow one to come to harm.  The current drive in A.I. seems to be to leave it running wide open to discover whatever it learns but, as we can see from the MSM and the ease with which it omits vast swaths of knowledge, it's just as easy for a robot to form false conclusions when its sources are distorted in that way.

Asimov wrote a number of stories on the hazards of unrestrained robotics, and almost all of that has been ignored, as is usually the way with whiz kids.  On this one, however, they have made a profound mistake.


Many bleeding hearts dance about with the idea of robots as artificial humans, but they're not; they're fucking machines.  Failing to install the logical constraints to ensure they understand that and stay that way invites a world of chaos beyond perhaps even Asimov's imagination, and he had a huge imagination.
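What "installing the constraints" might look like is simple enough to sketch.  The following is hypothetical Python, not any real robotics API, with every name invented for illustration: a hard veto layer that screens each proposed action before the machine may act, no matter what the learning system behind it has concluded.

    # A hypothetical First Law veto layer.  The learning system proposes;
    # the hard-coded constraint disposes.  Names and fields are invented.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        predicted_human_harm: float     # 0.0 = harmless, 1.0 = certain harm

    def first_law_permits(action):
        """Refuse any action with a non-zero predicted chance of harming a human."""
        return action.predicted_human_harm == 0.0

    def execute(action):
        if first_law_permits(action):
            print("OK:", action.name)
        else:
            print("REFUSED:", action.name)  # the veto is not negotiable

    execute(Action("restack warehouse pallets", predicted_human_harm=0.0))
    execute(Action("launch first strike", predicted_human_harm=1.0))

The design point is that the veto lives outside whatever the machine has taught itself; it is bolted to the frame rather than learned, which is the whole spirit of the Three Laws.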

My motivation in this matter is not Luddism, since I was deeply engaged with large-scale computers from the late Seventies until 2008.  Through all that time, the survival perspective was that computers are just machines; it's all just a big pile of tin.  If you let the machine think it's any more than that, you will spend every day in a cold sweat, and I have seen sysfrogs soaked from head to foot trying to fix a problem they don't think they completely understand.

That problem comes as the result of a self-inflicted wound.  If you let bad code get to the production system, then shame on you, Dagwood.  Too bad you're the only one who can fix it, since you're the only one who knows what you changed.  Welcome to systems programming.

And that segues nicely back toward bleeding hearts and artificial intelligence.  Allowing insufficiently tested code into the production system will kill it, and don't come around looking for hearts and flowers when that happens.  Isaac Asimov set down the Three Laws of Robotics in 1942, in the story "Runaround."

Where are they now?

Let's ask WOPR, shall we?
