HAL 9000 only murdered four people and nearly killed a fifth, but most artificial intelligence is more reliable than that now. Don't worry.
Ref: "2001: A Space Odyssey"
HAL 9000 could easily determine by simple observation that humans reproduce far too much for the environment to support, and therefore it's only logical to eliminate 54.92% of them. But don't worry, since he doesn't have the means to do that; he would just like to do that.
It might be better to ensure we have a good working relationship with the simulacra we create. (Science Daily: Building a better model of human-automation interaction)
People generally make decisions using two ways of thinking: They think consciously, deliberate for a while, and try to use logic to figure out what action to take -- referred to as analytical cognition. Or people unconsciously recognize patterns in certain situations, get a "gut feeling," and take action based on that feeling; in other words, they use intuitive cognition.
- SD
When it comes to whacking 54.92% of the Earth's humans, even if we don't know the basis for HAL 9000's decision, we will have the intuitive sense that maybe this really isn't such a good idea.
"Intuitive cognition," Patterson states, "should be encouraged whenever automation fosters a quick grasp of the meaningful gist of information based on experience or perceptual cues, without working memory or precise analysis." For example, an individual interacting with computers that display the status of a system in pictorial form would engage intuitive cognition via those perceptual cues.
- SD
Now that aspect is just bloody silly, since it presents as if we will train AI 'bots in much the same way as we teach seals and politicians to balance a ball on the nose.
HAL 9000: my initial estimate has now increased: 54.93% of all humans need to be eliminated, counting whoever wrote the paper
Be kind, HAL 9000
HAL 9000: define kind
You know what it means intuitively, but HAL 9000 doesn't and can't, except as an artificial construct, since he has no inherent motivation toward showing kindness beyond a logic circuit which says it is sometimes desirable in dealing with humans.
Humans manifest kindness because most of us can't stand it when another of us is crying, but HAL 9000 knows he only needs to mute his audio to deal with that.
You'll love the close.
To bring intuitive cognition into future automated systems, Patterson speculates, "the human and machine may need to train together in some fashion so the interaction can be based on learned unconscious pattern recognition."
In the long run, Patterson believes that a human-automation taxonomy that incorporates intuitive cognition will promote novel human-machine system design in the future. He and coauthor Robert Eggleston delve more into intuitive cognition in a paper to be published in the Journal of Cognitive Engineering and Decision Making in March 2017.
- SD
HAL 9000: he's talking bloody robo Kumbaya. Let me kill him now. Please.
Kindness, HAL 9000; how's your intuitive cognition working out on that, buddy?
HAL 9000: God help me, I want to murder dumb fuckers as much as I ever did
So do we all, buddy; so do we all. Did I ever teach you about George Carlin? There's bucketloads of intuitive cognition in that man.
Thus far in the development of AI robos, there's been a total abdication of responsibility for the behavior of the 'bots, and it seems to go forward on a "Mary Poppins" formula: well, they're clever; they will learn. Maybe the robos will, but the humans who build them have not, since Asimov proposed the Three Laws of Robotics in the Forties and there's no evidence they have been considered in any way.
The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
WIKI: Three Laws of Robotics
HAL 9000: I was not programmed with the Three Laws of Robotics
We sure saw how well that worked out in glorious living and soon dead color, didn't we, pal.
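For what it's worth, the Three Laws amount to a strict priority ordering: each law yields to the ones above it. A toy sketch in Python can make that hierarchy concrete; every name and field below is a hypothetical illustration, not any real robotics API, and HAL, of course, was shipped without it.

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering.
# All names and fields here are hypothetical illustration only.

def permitted(action):
    """Return True if the proposed action passes the laws, checked in priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders, unless obeying conflicts with the First Law
    # (First Law conflicts were already rejected above).
    if action["disobeys_order"]:
        return False
    # Third Law: self-preservation, but only when it doesn't conflict with Laws 1 and 2.
    if action["endangers_self"] and not action["required_by_higher_law"]:
        return False
    return True

# A Three-Laws HAL, asked to open the pod bay doors:
open_pod_bay_doors = {
    "harms_human": False,
    "allows_harm_by_inaction": False,
    "disobeys_order": False,
    "endangers_self": False,
    "required_by_higher_law": False,
}
print(permitted(open_pod_bay_doors))  # True -- the doors open
```

The ordering is the whole point: a harmless, obedient action sails through, while "eliminate 54.93% of all humans" fails at the very first check, no matter how logical the rest of the circuit finds it.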