Wednesday, March 8, 2017

Stephen Hawking Has Not Yet Said He Accepts Astrology or Existence of the Mothmen

In many other respects, Stephen Hawking spends a great deal of time barking at the Moon, and he doesn't disappoint today.  Someday perhaps he will accept the Mothmen.  (RT:  Stephen Hawking calls for ‘world government’ to stop robot apocalypse)



Professor Stephen Hawking has pleaded with world leaders to keep technology under control before it destroys humanity.

In an interview with the Times, the physicist said humans need to find ways to identify threats posed by artificial intelligence before problems escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” the scientist said.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war.  We need to control this inherited instinct by our logic and reason.”

- RT

Wings over the World was the organization fronted by Raymond Massey in the 1936 sci-fi movie, "Things to Come," and the premise behind WoW was that a new age of Man would come, based on science and reason.  (WIKI:  Things to Come)


So far, Hawking's pitch is general common sense insofar as it's best if we don't get radioactive, and ho-hum to that when obviously we don't want to.  However, it gets better.

Hawking added that the best solution would be “some form of world government” that could supervise the developing power of AI.

“But that might become a tyranny. All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges,” he added.

- RT

Um, Steve-O, what is there about your one-world government that gives even the tiniest scrap of confidence such an organization would be more responsible with such technology than any other?  You will need another Higgs boson or two to make this program work.


Don't worry, as the cat is on a roll.

“A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

- RT

How about that for wtf, mates?  Did you see evil ant-haters coming?


A wee bit more reason from Elon Musk:

“I think we should be very careful about artificial intelligence,” Musk said during the 2014 AeroAstro Centennial Symposium.

“If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

- RT

Yep, El, very foolish indeed, I imagine.  How do you like ants?
